Feb 23 12:58:31.999745 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 23 12:58:32.666969 master-0 kubenswrapper[4072]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 12:58:32.666969 master-0 kubenswrapper[4072]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 23 12:58:32.666969 master-0 kubenswrapper[4072]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 12:58:32.666969 master-0 kubenswrapper[4072]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 12:58:32.666969 master-0 kubenswrapper[4072]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 23 12:58:32.666969 master-0 kubenswrapper[4072]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 12:58:32.668765 master-0 kubenswrapper[4072]: I0223 12:58:32.667103 4072 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 23 12:58:32.678110 master-0 kubenswrapper[4072]: W0223 12:58:32.678024 4072 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 12:58:32.678110 master-0 kubenswrapper[4072]: W0223 12:58:32.678072 4072 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 12:58:32.678110 master-0 kubenswrapper[4072]: W0223 12:58:32.678087 4072 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 12:58:32.678110 master-0 kubenswrapper[4072]: W0223 12:58:32.678099 4072 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 12:58:32.678110 master-0 kubenswrapper[4072]: W0223 12:58:32.678121 4072 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678134 4072 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678145 4072 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678156 4072 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678166 4072 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678175 4072 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678185 4072 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678196 4072 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678211 4072 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678224 4072 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678236 4072 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678276 4072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678287 4072 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678299 4072 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678310 4072 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678319 4072 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678327 4072 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678335 4072 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678344 4072 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 12:58:32.678483 master-0 kubenswrapper[4072]: W0223 12:58:32.678353 4072 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678361 4072 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678369 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678377 4072 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678385 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678394 4072 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678402 4072 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678413 4072 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678422 4072 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678430 4072 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678441 4072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678451 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678458 4072 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678467 4072 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678474 4072 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678487 4072 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678496 4072 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678504 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678512 4072 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 12:58:32.679368 master-0 kubenswrapper[4072]: W0223 12:58:32.678529 4072 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678537 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678546 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678554 4072 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678561 4072 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678569 4072 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678576 4072 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678584 4072 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678592 4072 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678600 4072 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678610 4072 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678619 4072 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678628 4072 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678640 4072 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678650 4072 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678658 4072 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678665 4072 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678673 4072 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678681 4072 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 12:58:32.680507 master-0 kubenswrapper[4072]: W0223 12:58:32.678688 4072 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.678696 4072 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.678703 4072 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.678711 4072 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.678720 4072 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.678728 4072 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.678735 4072 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.678742 4072 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.678836 4072 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.679517 4072 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: W0223 12:58:32.679530 4072 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.680494 4072 flags.go:64] FLAG: --address="0.0.0.0"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.680814 4072 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.680923 4072 flags.go:64] FLAG: --anonymous-auth="true"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.680942 4072 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.680959 4072 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.680973 4072 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.680989 4072 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.681006 4072 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.681019 4072 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.681043 4072 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.681056 4072 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 23 12:58:32.681563 master-0 kubenswrapper[4072]: I0223 12:58:32.681069 4072 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681082 4072 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681095 4072 flags.go:64] FLAG: --cgroup-root=""
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681109 4072 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681122 4072 flags.go:64] FLAG: --client-ca-file=""
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681133 4072 flags.go:64] FLAG: --cloud-config=""
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681154 4072 flags.go:64] FLAG: --cloud-provider=""
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681167 4072 flags.go:64] FLAG: --cluster-dns="[]"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681184 4072 flags.go:64] FLAG: --cluster-domain=""
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681422 4072 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681830 4072 flags.go:64] FLAG: --config-dir=""
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681872 4072 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681890 4072 flags.go:64] FLAG: --container-log-max-files="5"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681910 4072 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681925 4072 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681937 4072 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681947 4072 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681957 4072 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681967 4072 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681978 4072 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.681990 4072 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.682000 4072 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.682013 4072 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.682026 4072 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.682036 4072 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 12:58:32.682802 master-0 kubenswrapper[4072]: I0223 12:58:32.682046 4072 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682055 4072 flags.go:64] FLAG: --enable-server="true"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682068 4072 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682088 4072 flags.go:64] FLAG: --event-burst="100"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682099 4072 flags.go:64] FLAG: --event-qps="50"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682109 4072 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682119 4072 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682127 4072 flags.go:64] FLAG: --eviction-hard=""
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682167 4072 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682176 4072 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682186 4072 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682196 4072 flags.go:64] FLAG: --eviction-soft=""
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682206 4072 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682215 4072 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682224 4072 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682234 4072 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682277 4072 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682288 4072 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682299 4072 flags.go:64] FLAG: --feature-gates=""
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682315 4072 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682327 4072 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682339 4072 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682352 4072 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682366 4072 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682377 4072 flags.go:64] FLAG: --help="false"
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682386 4072 flags.go:64] FLAG: --hostname-override=""
Feb 23 12:58:32.684058 master-0 kubenswrapper[4072]: I0223 12:58:32.682396 4072 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682406 4072 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682415 4072 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682424 4072 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682433 4072 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682442 4072 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682451 4072 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682460 4072 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682470 4072 flags.go:64] FLAG: --kube-api-burst="100"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682479 4072 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682490 4072 flags.go:64] FLAG: --kube-api-qps="50"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682499 4072 flags.go:64] FLAG: --kube-reserved=""
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682509 4072 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682519 4072 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682530 4072 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682539 4072 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682550 4072 flags.go:64] FLAG: --lock-file=""
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682558 4072 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682568 4072 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682578 4072 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682594 4072 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682603 4072 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682613 4072 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682622 4072 flags.go:64] FLAG: --logging-format="text"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682632 4072 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 12:58:32.685390 master-0 kubenswrapper[4072]: I0223 12:58:32.682642 4072 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682651 4072 flags.go:64] FLAG: --manifest-url=""
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682660 4072 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682675 4072 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682684 4072 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682696 4072 flags.go:64] FLAG: --max-pods="110"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682706 4072 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682715 4072 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682724 4072 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682733 4072 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682743 4072 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682753 4072 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682763 4072 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682793 4072 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682802 4072 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682812 4072 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682823 4072 flags.go:64] FLAG: --pod-cidr=""
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682832 4072 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682845 4072 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682854 4072 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682866 4072 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682875 4072 flags.go:64] FLAG: --port="10250"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682885 4072 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682894 4072 flags.go:64] FLAG: --provider-id=""
Feb 23 12:58:32.686550 master-0 kubenswrapper[4072]: I0223 12:58:32.682902 4072 flags.go:64] FLAG: --qos-reserved=""
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.682911 4072 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.682921 4072 flags.go:64] FLAG: --register-node="true"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.682930 4072 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.682938 4072 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.682955 4072 flags.go:64] FLAG: --registry-burst="10"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.682977 4072 flags.go:64] FLAG: --registry-qps="5"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.682986 4072 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.682995 4072 flags.go:64] FLAG: --reserved-memory=""
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683007 4072 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683016 4072 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683025 4072 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683035 4072 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683044 4072 flags.go:64] FLAG: --runonce="false"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683052 4072 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683062 4072 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683071 4072 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683080 4072 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683089 4072 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683098 4072 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683108 4072 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683117 4072 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683126 4072 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683136 4072 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683145 4072 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 12:58:32.687662 master-0 kubenswrapper[4072]: I0223 12:58:32.683154 4072 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683164 4072 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683173 4072 flags.go:64] FLAG: --system-cgroups=""
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683183 4072 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683198 4072 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683207 4072 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683216 4072 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683230 4072 flags.go:64] FLAG: --tls-min-version=""
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683239 4072 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683275 4072 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683284 4072 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683293 4072 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683302 4072 flags.go:64] FLAG: --v="2"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683318 4072 flags.go:64] FLAG: --version="false"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683330 4072 flags.go:64] FLAG: --vmodule=""
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683341 4072 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: I0223 12:58:32.683351 4072 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: W0223 12:58:32.683622 4072 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: W0223 12:58:32.683634 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: W0223 12:58:32.683643 4072 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: W0223 12:58:32.683651 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: W0223 12:58:32.683661 4072 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: W0223 12:58:32.683670 4072 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: W0223 12:58:32.683678 4072 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 12:58:32.688948 master-0 kubenswrapper[4072]: W0223 12:58:32.683686 4072 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683694 4072 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683702 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683710 4072 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683718 4072 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683726 4072 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683734 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683742 4072 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683749 4072 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683757 4072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683766 4072 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683774 4072 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683782 4072 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683789 4072 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683797 4072 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683805 4072 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683813 4072 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683823 4072 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683833 4072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683841 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 12:58:32.690238 master-0 kubenswrapper[4072]: W0223 12:58:32.683849 4072 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683860 4072 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683867 4072 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683878 4072 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683889 4072 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683897 4072 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683906 4072 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683916 4072 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683924 4072 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683934 4072 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683943 4072 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683952 4072 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683959 4072 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683967 4072 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683975 4072 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683986 4072 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.683996 4072 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.684005 4072 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 12:58:32.691175 master-0 kubenswrapper[4072]: W0223 12:58:32.684015 4072 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684025 4072 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684034 4072 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684042 4072 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684051 4072 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684060 4072 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684069 4072 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684079 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684089 4072 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684099 4072 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684107 4072 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684116 4072 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684124 4072 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684133 4072 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684141 4072 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684151 4072 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684160 4072 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684167 4072 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684175 4072 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684183 4072 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 12:58:32.692022 master-0 kubenswrapper[4072]: W0223 12:58:32.684191 4072 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 12:58:32.693004 master-0 kubenswrapper[4072]: W0223 12:58:32.684199 4072 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 12:58:32.693004 master-0 kubenswrapper[4072]: W0223 12:58:32.684207 4072 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 12:58:32.693004 master-0 kubenswrapper[4072]: W0223 12:58:32.684215 4072 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 12:58:32.693004 master-0 kubenswrapper[4072]: W0223 12:58:32.684272 4072 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 12:58:32.693004 master-0 kubenswrapper[4072]: W0223 12:58:32.684283 4072 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 12:58:32.693004 master-0 kubenswrapper[4072]: W0223 12:58:32.684290 4072 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 12:58:32.693004 master-0 kubenswrapper[4072]: I0223 12:58:32.684322 4072 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 12:58:32.696729 master-0 kubenswrapper[4072]: I0223 12:58:32.696668 4072 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 23 12:58:32.696729 master-0 kubenswrapper[4072]: I0223 12:58:32.696711 4072 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 12:58:32.696869 master-0 kubenswrapper[4072]: W0223 12:58:32.696839 4072 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 12:58:32.696869 master-0 kubenswrapper[4072]: W0223 12:58:32.696854 4072 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 12:58:32.696869 master-0 kubenswrapper[4072]: W0223 12:58:32.696865 4072 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696875 4072 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696886 4072 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696895 4072 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696904 4072 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696913 4072 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696921 4072 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696930 4072 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696938 4072 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696946 4072 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696954 4072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696963 4072 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696971 4072 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696979 4072 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696987 4072 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.696995 4072 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.697002 4072 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.697011 4072 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.697020 4072 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.697031 4072 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 12:58:32.697007 master-0 kubenswrapper[4072]: W0223 12:58:32.697041 4072 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697050 4072 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697061 4072 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697071 4072 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697081 4072 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697089 4072 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697098 4072 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697106 4072 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697114 4072 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697122 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697130 4072 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697138 4072 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697148 4072 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697158 4072 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697167 4072 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697198 4072 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697206 4072 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697214 4072 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697222 4072 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 12:58:32.698217 master-0 kubenswrapper[4072]: W0223 12:58:32.697231 4072 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697239 4072 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697272 4072 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697282 4072 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697292 4072 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697300 4072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697308 4072 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697315 4072 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697323 4072 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697331 4072 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697339 4072 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697348 4072 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697356 4072 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697364 4072 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697372 4072 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697384 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697392 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697399 4072 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697407 4072 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697416 4072 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 12:58:32.699667 master-0 kubenswrapper[4072]: W0223 12:58:32.697424 4072 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697433 4072 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697442 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697450 4072 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697458 4072 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697466 4072 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697474 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697482 4072 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697489 4072 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697497 4072 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697505 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: I0223 12:58:32.697519 4072 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697811 4072 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697824 4072 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697833 4072 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 12:58:32.701238 master-0 kubenswrapper[4072]: W0223 12:58:32.697842 4072 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697850 4072 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697859 4072 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697867 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697876 4072 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697886 4072 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697894 4072 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697903 4072 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697911 4072 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697922 4072 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697932 4072 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697942 4072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697952 4072 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697961 4072 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697970 4072 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697978 4072 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697989 4072 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.697999 4072 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.698007 4072 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.698016 4072 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 12:58:32.702560 master-0 kubenswrapper[4072]: W0223 12:58:32.698025 4072 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698034 4072 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698045 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698053 4072 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698061 4072 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698069 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698078 4072 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698086 4072 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698094 4072 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698101 4072 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698108 4072 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698116 4072 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698125 4072 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698132 4072 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698140 4072 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698148 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698155 4072 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698164 4072 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698171 4072 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698179 4072 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 12:58:32.703901 master-0 kubenswrapper[4072]: W0223 12:58:32.698186 4072 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698194 4072 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698202 4072 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698210 4072 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698218 4072 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698227 4072 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698235 4072 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698270 4072 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698279 4072 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698286 4072 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698294 4072 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698302 4072 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698312 4072 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698320 4072 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698327 4072 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698335 4072 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698343 4072 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698352 4072 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698364 4072 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698373 4072 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 12:58:32.704842 master-0 kubenswrapper[4072]: W0223 12:58:32.698380 4072 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: W0223 12:58:32.698388 4072 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: W0223 12:58:32.698395 4072 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: W0223 12:58:32.698403 4072 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: W0223 12:58:32.698411 4072 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: W0223 12:58:32.698419 4072 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: W0223 12:58:32.698429 4072 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: W0223 12:58:32.698439 4072 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: W0223 12:58:32.698448 4072 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: I0223 12:58:32.698460 4072 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: I0223 12:58:32.698724 4072 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: I0223 12:58:32.702003 4072 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: I0223 12:58:32.704486 4072 server.go:997] "Starting client certificate rotation"
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: I0223 12:58:32.704524 4072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 23 12:58:32.705780 master-0 kubenswrapper[4072]: I0223 12:58:32.704787 4072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 12:58:32.737335 master-0 kubenswrapper[4072]: I0223 12:58:32.737217 4072 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 12:58:32.740689 master-0 kubenswrapper[4072]: E0223 12:58:32.740616 4072 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:32.743325 master-0 kubenswrapper[4072]: I0223 12:58:32.743226 4072 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 12:58:32.759963 master-0 kubenswrapper[4072]: I0223 12:58:32.759885 4072 log.go:25] "Validated CRI v1 runtime API"
Feb 23 12:58:32.766199 master-0 kubenswrapper[4072]: I0223 12:58:32.766137 4072 log.go:25] "Validated CRI v1 image API"
Feb 23 12:58:32.768498 master-0 kubenswrapper[4072]: I0223 12:58:32.768451 4072 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 23 12:58:32.776470 master-0 kubenswrapper[4072]: I0223 12:58:32.776390 4072 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 a0645d8c-797c-4e96-9069-34c436b1201e:/dev/vda3]
Feb 23 12:58:32.776600 master-0 kubenswrapper[4072]: I0223 12:58:32.776451 4072 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Feb 23 12:58:32.811913 master-0 kubenswrapper[4072]: I0223 12:58:32.811385 4072 manager.go:217] Machine: {Timestamp:2026-02-23 12:58:32.808115446 +0000 UTC m=+0.618272128 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1f5e0293a13e4ebabb9c281fe953e842 SystemUUID:1f5e0293-a13e-4eba-bb9c-281fe953e842 BootID:08350faf-787c-4da6-a444-e23ed90f1388 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fe:58:4c Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:42:b9:27:f4:5e:8e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 23 12:58:32.811913 master-0 kubenswrapper[4072]: I0223 12:58:32.811814 4072 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 23 12:58:32.812183 master-0 kubenswrapper[4072]: I0223 12:58:32.811988 4072 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 23 12:58:32.813724 master-0 kubenswrapper[4072]: I0223 12:58:32.813665 4072 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 23 12:58:32.814075 master-0 kubenswrapper[4072]: I0223 12:58:32.814001 4072 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 23 12:58:32.814501 master-0 kubenswrapper[4072]: I0223 12:58:32.814059 4072 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 23 12:58:32.814501 master-0 kubenswrapper[4072]: I0223 12:58:32.814498 4072 topology_manager.go:138] "Creating topology manager with none policy"
Feb 23 12:58:32.814654 master-0 kubenswrapper[4072]: I0223 12:58:32.814517 4072 container_manager_linux.go:303] "Creating device plugin manager"
Feb 23 12:58:32.815562 master-0 kubenswrapper[4072]: I0223 12:58:32.815507 4072 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 12:58:32.815562 master-0 kubenswrapper[4072]: I0223 12:58:32.815558 4072 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 12:58:32.815793 master-0 kubenswrapper[4072]: I0223 12:58:32.815749 4072 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 12:58:32.816340 master-0 kubenswrapper[4072]: I0223 12:58:32.816302 4072 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 23 12:58:32.821700 master-0 kubenswrapper[4072]: I0223 12:58:32.821650 4072 kubelet.go:418] "Attempting to sync node with API server"
Feb 23 12:58:32.821700 master-0 kubenswrapper[4072]: I0223 12:58:32.821692 4072 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 23 12:58:32.821853 master-0 kubenswrapper[4072]: I0223 12:58:32.821770 4072 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 23 12:58:32.821853 master-0 kubenswrapper[4072]: I0223 12:58:32.821791 4072 kubelet.go:324] "Adding apiserver pod source"
Feb 23 12:58:32.821853 master-0 kubenswrapper[4072]: I0223 12:58:32.821846 4072 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 23 12:58:32.831498 master-0 kubenswrapper[4072]: W0223 12:58:32.831354 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:32.831498 master-0 kubenswrapper[4072]: W0223 12:58:32.831391 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:32.831661 master-0 kubenswrapper[4072]: E0223 12:58:32.831498 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:32.831661 master-0 kubenswrapper[4072]: E0223 12:58:32.831513 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:32.833149 master-0 kubenswrapper[4072]: I0223 12:58:32.833073 4072 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1"
Feb 23 12:58:32.836270 master-0 kubenswrapper[4072]: I0223 12:58:32.836199 4072 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 23 12:58:32.836521 master-0 kubenswrapper[4072]: I0223 12:58:32.836470 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 23 12:58:32.836521 master-0 kubenswrapper[4072]: I0223 12:58:32.836511 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 23 12:58:32.836521 master-0 kubenswrapper[4072]: I0223 12:58:32.836527 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 23 12:58:32.836727 master-0 kubenswrapper[4072]: I0223 12:58:32.836552 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 23 12:58:32.836727 master-0 kubenswrapper[4072]: I0223 12:58:32.836566 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 23 12:58:32.836727 master-0 kubenswrapper[4072]: I0223 12:58:32.836588 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 23 12:58:32.836727 master-0 kubenswrapper[4072]: I0223 12:58:32.836601 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 23 12:58:32.836727 master-0 kubenswrapper[4072]: I0223 12:58:32.836622 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 23 12:58:32.836727 master-0 kubenswrapper[4072]: I0223 12:58:32.836655 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 23 12:58:32.836727 master-0 kubenswrapper[4072]: I0223 12:58:32.836676 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 23 12:58:32.836727 master-0 kubenswrapper[4072]: I0223 12:58:32.836700 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 23 12:58:32.837461 master-0 kubenswrapper[4072]: I0223 12:58:32.837409 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 23 12:58:32.839820 master-0 kubenswrapper[4072]: I0223 12:58:32.839765 4072 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 23 12:58:32.840543 master-0 kubenswrapper[4072]: I0223 12:58:32.840499 4072 server.go:1280] "Started kubelet"
Feb 23 12:58:32.842802 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 23 12:58:32.842968 master-0 kubenswrapper[4072]: I0223 12:58:32.842186 4072 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 23 12:58:32.843030 master-0 kubenswrapper[4072]: I0223 12:58:32.842186 4072 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 23 12:58:32.843567 master-0 kubenswrapper[4072]: I0223 12:58:32.843480 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:32.843828 master-0 kubenswrapper[4072]: I0223 12:58:32.843772 4072 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 23 12:58:32.844615 master-0 kubenswrapper[4072]: I0223 12:58:32.844537 4072 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 23 12:58:32.845048 master-0 kubenswrapper[4072]: I0223 12:58:32.845000 4072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 23 12:58:32.845115 master-0 kubenswrapper[4072]: I0223 12:58:32.845051 4072 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 23 12:58:32.845402 master-0 kubenswrapper[4072]: I0223 12:58:32.845353 4072 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 23 12:58:32.845402 master-0 kubenswrapper[4072]: I0223 12:58:32.845401 4072 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 23 12:58:32.845514 master-0 kubenswrapper[4072]: E0223 12:58:32.845402 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:58:32.845514 master-0 kubenswrapper[4072]: I0223 12:58:32.845464 4072 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 23 12:58:32.851176 master-0 kubenswrapper[4072]: I0223 12:58:32.845592 4072 reconstruct.go:97] "Volume reconstruction finished"
Feb 23 12:58:32.851176 master-0 kubenswrapper[4072]: I0223 12:58:32.851146 4072 reconciler.go:26] "Reconciler: start to sync state"
Feb 23 12:58:32.855372 master-0 kubenswrapper[4072]: W0223 12:58:32.854887 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:32.855372 master-0 kubenswrapper[4072]: E0223 12:58:32.855088 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:32.855716 master-0 kubenswrapper[4072]: I0223 12:58:32.855482 4072 factory.go:55] Registering systemd factory
Feb 23 12:58:32.855716 master-0 kubenswrapper[4072]: I0223 12:58:32.855513 4072 factory.go:221] Registration of the systemd container factory successfully
Feb 23 12:58:32.856186 master-0 kubenswrapper[4072]: E0223 12:58:32.856098 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 23 12:58:32.857013 master-0 kubenswrapper[4072]: I0223 12:58:32.856935 4072 factory.go:153] Registering CRI-O factory
Feb 23 12:58:32.857013 master-0 kubenswrapper[4072]: I0223 12:58:32.856999 4072 factory.go:221] Registration of the crio container factory successfully
Feb 23 12:58:32.857198 master-0 kubenswrapper[4072]: I0223 12:58:32.857158 4072 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 23 12:58:32.857313 master-0 kubenswrapper[4072]: I0223 12:58:32.857265 4072 factory.go:103] Registering Raw factory
Feb 23 12:58:32.857416 master-0 kubenswrapper[4072]: I0223 12:58:32.857362 4072 manager.go:1196] Started watching for new ooms in manager
Feb 23 12:58:32.862481 master-0 kubenswrapper[4072]: I0223 12:58:32.862428 4072 server.go:449] "Adding debug handlers to kubelet server"
Feb 23 12:58:32.864508 master-0 kubenswrapper[4072]: E0223 12:58:32.864439 4072 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 23 12:58:32.864685 master-0 kubenswrapper[4072]: E0223 12:58:32.862639 4072 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1896e1903197c319 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.840438553 +0000 UTC m=+0.650595205,LastTimestamp:2026-02-23 12:58:32.840438553 +0000 UTC m=+0.650595205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:32.864984 master-0 kubenswrapper[4072]: I0223 12:58:32.864915 4072 manager.go:319] Starting recovery of all containers
Feb 23 12:58:32.890759 master-0 kubenswrapper[4072]: I0223 12:58:32.890680 4072 manager.go:324] Recovery completed
Feb 23 12:58:32.908988 master-0 kubenswrapper[4072]: I0223 12:58:32.908910 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:32.912792 master-0 kubenswrapper[4072]: I0223 12:58:32.912637 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:32.913036 master-0 kubenswrapper[4072]: I0223 12:58:32.912984 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:32.913134 master-0 kubenswrapper[4072]: I0223 12:58:32.913069 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:32.916427 master-0 kubenswrapper[4072]: I0223 12:58:32.916369 4072 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 23 12:58:32.916427 master-0 kubenswrapper[4072]: I0223 12:58:32.916408 4072 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 23 12:58:32.916581 master-0 kubenswrapper[4072]: I0223 12:58:32.916477 4072 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 12:58:32.920011 master-0 kubenswrapper[4072]: I0223 12:58:32.919828 4072 policy_none.go:49] "None policy: Start"
Feb 23 12:58:32.920795 master-0 kubenswrapper[4072]: I0223 12:58:32.920737 4072 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 23 12:58:32.920795 master-0 kubenswrapper[4072]: I0223 12:58:32.920782 4072 state_mem.go:35] "Initializing new in-memory state store"
Feb 23 12:58:32.946185 master-0 kubenswrapper[4072]: E0223 12:58:32.946135 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:58:33.003007 master-0 kubenswrapper[4072]: I0223 12:58:33.002961 4072 manager.go:334] "Starting Device Plugin manager"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.003038 4072 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.003061 4072 server.go:79] "Starting device plugin registration server"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.003735 4072 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.004177 4072 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.004462 4072 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.004673 4072 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.004688 4072 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: E0223 12:58:33.006283 4072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.025415 4072 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.028091 4072 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.028139 4072 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: I0223 12:58:33.028166 4072 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: E0223 12:58:33.028219 4072 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: W0223 12:58:33.030163 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:33.039276 master-0 kubenswrapper[4072]: E0223 12:58:33.030328 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:33.058355 master-0 kubenswrapper[4072]: E0223 12:58:33.058287 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 23 12:58:33.104624 master-0 kubenswrapper[4072]: I0223 12:58:33.104550 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:33.107277 master-0 kubenswrapper[4072]: I0223 12:58:33.106562 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:33.107277 master-0 kubenswrapper[4072]: I0223 12:58:33.106611 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:33.107277 master-0 kubenswrapper[4072]: I0223 12:58:33.106628 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:33.107277 master-0 kubenswrapper[4072]: I0223 12:58:33.106674 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:58:33.108030 master-0 kubenswrapper[4072]: E0223 12:58:33.107740 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:58:33.129352 master-0 kubenswrapper[4072]: I0223 12:58:33.128794 4072 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Feb 23 12:58:33.129352 master-0 kubenswrapper[4072]: I0223 12:58:33.128890 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.130338 master-0 kubenswrapper[4072]: I0223 12:58:33.130240 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.130338 master-0 kubenswrapper[4072]: I0223 12:58:33.130318 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.130338 master-0 kubenswrapper[4072]: I0223 12:58:33.130334 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.130545 master-0 kubenswrapper[4072]: I0223 12:58:33.130483 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.130976 master-0 kubenswrapper[4072]: I0223 12:58:33.130920 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:58:33.131050 master-0 kubenswrapper[4072]: I0223 12:58:33.131004 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.131736 master-0 kubenswrapper[4072]: I0223 12:58:33.131692 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.131736 master-0 kubenswrapper[4072]: I0223 12:58:33.131736 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.132130 master-0 kubenswrapper[4072]: I0223 12:58:33.131754 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.132130 master-0 kubenswrapper[4072]: I0223 12:58:33.131967 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.132280 master-0 kubenswrapper[4072]: I0223 12:58:33.132123 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:58:33.132280 master-0 kubenswrapper[4072]: I0223 12:58:33.132173 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.132280 master-0 kubenswrapper[4072]: I0223 12:58:33.132161 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.132280 master-0 kubenswrapper[4072]: I0223 12:58:33.132281 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.132564 master-0 kubenswrapper[4072]: I0223 12:58:33.132303 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.133392 master-0 kubenswrapper[4072]: I0223 12:58:33.133279 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.133392 master-0 kubenswrapper[4072]: I0223 12:58:33.133332 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.133392 master-0 kubenswrapper[4072]: I0223 12:58:33.133352 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.133717 master-0 kubenswrapper[4072]: I0223 12:58:33.133295 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.133717 master-0 kubenswrapper[4072]: I0223 12:58:33.133454 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.133717 master-0 kubenswrapper[4072]: I0223 12:58:33.133471 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.133717 master-0 
kubenswrapper[4072]: I0223 12:58:33.133627 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.133989 master-0 kubenswrapper[4072]: I0223 12:58:33.133864 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.133989 master-0 kubenswrapper[4072]: I0223 12:58:33.133929 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.134902 master-0 kubenswrapper[4072]: I0223 12:58:33.134762 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.134902 master-0 kubenswrapper[4072]: I0223 12:58:33.134847 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.134902 master-0 kubenswrapper[4072]: I0223 12:58:33.134876 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.135146 master-0 kubenswrapper[4072]: I0223 12:58:33.135073 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.135146 master-0 kubenswrapper[4072]: I0223 12:58:33.135116 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.135337 master-0 kubenswrapper[4072]: I0223 12:58:33.135127 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.135337 master-0 kubenswrapper[4072]: I0223 12:58:33.135290 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.135337 master-0 kubenswrapper[4072]: I0223 12:58:33.135339 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.135582 master-0 kubenswrapper[4072]: I0223 12:58:33.135292 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.136651 master-0 kubenswrapper[4072]: I0223 12:58:33.136592 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.136651 master-0 kubenswrapper[4072]: I0223 12:58:33.136608 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.136651 master-0 kubenswrapper[4072]: I0223 12:58:33.136653 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.136982 master-0 kubenswrapper[4072]: I0223 12:58:33.136682 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.136982 master-0 kubenswrapper[4072]: I0223 12:58:33.136687 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.136982 master-0 kubenswrapper[4072]: I0223 12:58:33.136709 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.137216 master-0 kubenswrapper[4072]: I0223 12:58:33.137098 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:58:33.137216 master-0 kubenswrapper[4072]: I0223 12:58:33.137136 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:33.138135 master-0 kubenswrapper[4072]: I0223 12:58:33.138071 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:33.138135 master-0 kubenswrapper[4072]: I0223 12:58:33.138113 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:33.138135 master-0 kubenswrapper[4072]: I0223 12:58:33.138130 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:33.154188 master-0 kubenswrapper[4072]: I0223 12:58:33.154128 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:58:33.154188 master-0 kubenswrapper[4072]: I0223 12:58:33.154186 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:58:33.154506 master-0 kubenswrapper[4072]: I0223 12:58:33.154219 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.154506 master-0 kubenswrapper[4072]: I0223 12:58:33.154290 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.154506 master-0 kubenswrapper[4072]: I0223 12:58:33.154452 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.154506 master-0 kubenswrapper[4072]: I0223 12:58:33.154511 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:58:33.154842 master-0 kubenswrapper[4072]: I0223 12:58:33.154545 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:58:33.154842 master-0 kubenswrapper[4072]: I0223 12:58:33.154578 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:58:33.154842 master-0 kubenswrapper[4072]: I0223 12:58:33.154609 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.154842 master-0 kubenswrapper[4072]: I0223 12:58:33.154665 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.154842 master-0 kubenswrapper[4072]: I0223 12:58:33.154714 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.154842 master-0 kubenswrapper[4072]: I0223 12:58:33.154745 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.154842 master-0 
kubenswrapper[4072]: I0223 12:58:33.154775 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.154842 master-0 kubenswrapper[4072]: I0223 12:58:33.154808 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.154842 master-0 kubenswrapper[4072]: I0223 12:58:33.154841 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:58:33.155448 master-0 kubenswrapper[4072]: I0223 12:58:33.154874 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.155448 master-0 kubenswrapper[4072]: I0223 12:58:33.154904 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.224116 master-0 kubenswrapper[4072]: E0223 12:58:33.223879 4072 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1896e1903197c319 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.840438553 +0000 UTC m=+0.650595205,LastTimestamp:2026-02-23 12:58:32.840438553 +0000 UTC m=+0.650595205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:33.255742 master-0 kubenswrapper[4072]: I0223 12:58:33.255656 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.255742 master-0 kubenswrapper[4072]: I0223 12:58:33.255733 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.255995 master-0 kubenswrapper[4072]: I0223 12:58:33.255945 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.256066 master-0 kubenswrapper[4072]: I0223 12:58:33.256028 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.256130 master-0 kubenswrapper[4072]: I0223 12:58:33.256055 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.256130 master-0 kubenswrapper[4072]: I0223 12:58:33.256099 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.256321 master-0 kubenswrapper[4072]: I0223 12:58:33.256135 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:58:33.256321 master-0 kubenswrapper[4072]: I0223 12:58:33.256175 4072 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.256321 master-0 kubenswrapper[4072]: I0223 12:58:33.256200 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.256321 master-0 kubenswrapper[4072]: I0223 12:58:33.256207 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.256321 master-0 kubenswrapper[4072]: I0223 12:58:33.256210 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.256321 master-0 kubenswrapper[4072]: I0223 12:58:33.256319 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.256321 master-0 kubenswrapper[4072]: I0223 12:58:33.256334 4072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:58:33.256867 master-0 kubenswrapper[4072]: I0223 12:58:33.256294 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:58:33.256867 master-0 kubenswrapper[4072]: I0223 12:58:33.256445 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.256867 master-0 kubenswrapper[4072]: I0223 12:58:33.256559 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:58:33.256867 master-0 kubenswrapper[4072]: I0223 12:58:33.256636 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:58:33.256867 master-0 kubenswrapper[4072]: I0223 12:58:33.256663 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 12:58:33.256867 master-0 kubenswrapper[4072]: I0223 12:58:33.256618 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.256867 master-0 kubenswrapper[4072]: I0223 12:58:33.256764 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.256864 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.256961 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.256992 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257024 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257054 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257072 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257089 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257144 4072 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257157 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257226 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257282 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257304 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257365 
4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 12:58:33.257627 master-0 kubenswrapper[4072]: I0223 12:58:33.257380 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 12:58:33.308916 master-0 kubenswrapper[4072]: I0223 12:58:33.308818 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:33.310880 master-0 kubenswrapper[4072]: I0223 12:58:33.310829 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:33.311023 master-0 kubenswrapper[4072]: I0223 12:58:33.310901 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:33.311023 master-0 kubenswrapper[4072]: I0223 12:58:33.310919 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:33.311133 master-0 kubenswrapper[4072]: I0223 12:58:33.311055 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:58:33.312463 master-0 kubenswrapper[4072]: E0223 12:58:33.312368 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:58:33.459839 master-0 kubenswrapper[4072]: E0223 12:58:33.459756 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 23 12:58:33.475468 master-0 kubenswrapper[4072]: I0223 12:58:33.475402 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 12:58:33.489026 master-0 kubenswrapper[4072]: I0223 12:58:33.488981 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 12:58:33.508120 master-0 kubenswrapper[4072]: I0223 12:58:33.508087 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 12:58:33.521566 master-0 kubenswrapper[4072]: I0223 12:58:33.521502 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 12:58:33.529664 master-0 kubenswrapper[4072]: I0223 12:58:33.529621 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 12:58:33.709639 master-0 kubenswrapper[4072]: W0223 12:58:33.709466 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:33.709639 master-0 kubenswrapper[4072]: E0223 12:58:33.709577 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:33.713131 master-0 kubenswrapper[4072]: I0223 12:58:33.712994 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:33.714555 master-0 kubenswrapper[4072]: I0223 12:58:33.714495 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:33.714555 master-0 kubenswrapper[4072]: I0223 12:58:33.714556 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:33.714780 master-0 kubenswrapper[4072]: I0223 12:58:33.714581 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:33.714780 master-0 kubenswrapper[4072]: I0223 12:58:33.714656 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:58:33.715683 master-0 kubenswrapper[4072]: E0223 12:58:33.715621 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:58:33.845477 master-0 kubenswrapper[4072]: I0223 12:58:33.845392 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:34.020554 master-0 kubenswrapper[4072]: W0223 12:58:34.020349 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:34.020554 master-0 kubenswrapper[4072]: E0223 12:58:34.020485 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:34.127960 master-0 kubenswrapper[4072]: W0223 12:58:34.127695 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ad9373c007a4fcd25e70622bdc8deb.slice/crio-fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5 WatchSource:0}: Error finding container fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5: Status 404 returned error can't find the container with id fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5
Feb 23 12:58:34.128847 master-0 kubenswrapper[4072]: W0223 12:58:34.128674 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:34.128847 master-0 kubenswrapper[4072]: W0223 12:58:34.128717 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c3cb71c9851003c8de7e7c5db4b87e.slice/crio-c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021 WatchSource:0}: Error finding container c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021: Status 404 returned error can't find the container with id c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021
Feb 23 12:58:34.128847 master-0 kubenswrapper[4072]: E0223 12:58:34.128759 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:34.135640 master-0 kubenswrapper[4072]: I0223 12:58:34.135496 4072 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 12:58:34.166493 master-0 kubenswrapper[4072]: W0223 12:58:34.166424 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc997c8e9d3be51d454d8e61e376bef08.slice/crio-f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02 WatchSource:0}: Error finding container f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02: Status 404 returned error can't find the container with id f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02
Feb 23 12:58:34.214900 master-0 kubenswrapper[4072]: W0223 12:58:34.214840 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dab5d350ebc129b0bfa4714d330b15.slice/crio-986ae970a2c0750329313ea9f039e9fe0804cca7630dc137dcff229019ea869e WatchSource:0}: Error finding container 986ae970a2c0750329313ea9f039e9fe0804cca7630dc137dcff229019ea869e: Status 404 returned error can't find the container with id 986ae970a2c0750329313ea9f039e9fe0804cca7630dc137dcff229019ea869e
Feb 23 12:58:34.261792 master-0 kubenswrapper[4072]: E0223 12:58:34.261614 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 23 12:58:34.268419 master-0 kubenswrapper[4072]: W0223 12:58:34.268367 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687e92a6cecf1e2beeef16a0b322ad08.slice/crio-dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c WatchSource:0}: Error finding container dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c: Status 404 returned error can't find the container with id dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c
Feb 23 12:58:34.329735 master-0 kubenswrapper[4072]: W0223 12:58:34.329619 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:34.329735 master-0 kubenswrapper[4072]: E0223 12:58:34.329729 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:34.516672 master-0 kubenswrapper[4072]: I0223 12:58:34.516449 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:34.522854 master-0 kubenswrapper[4072]: I0223 12:58:34.522777 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:34.522854 master-0 kubenswrapper[4072]: I0223 12:58:34.522844 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:34.522854 master-0 kubenswrapper[4072]: I0223 12:58:34.522864 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:34.523201 master-0 kubenswrapper[4072]: I0223 12:58:34.522941 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:58:34.524298 master-0 kubenswrapper[4072]: E0223 12:58:34.524188 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:58:34.779282 master-0 kubenswrapper[4072]: I0223 12:58:34.779036 4072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 12:58:34.781314 master-0 kubenswrapper[4072]: E0223 12:58:34.781215 4072 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:34.845410 master-0 kubenswrapper[4072]: I0223 12:58:34.845337 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:35.035967 master-0 kubenswrapper[4072]: I0223 12:58:35.035649 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c"}
Feb 23 12:58:35.037879 master-0 kubenswrapper[4072]: I0223 12:58:35.037785 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"986ae970a2c0750329313ea9f039e9fe0804cca7630dc137dcff229019ea869e"}
Feb 23 12:58:35.039268 master-0 kubenswrapper[4072]: I0223 12:58:35.039201 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02"}
Feb 23 12:58:35.041102 master-0 kubenswrapper[4072]: I0223 12:58:35.041061 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021"}
Feb 23 12:58:35.042772 master-0 kubenswrapper[4072]: I0223 12:58:35.042721 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5"}
Feb 23 12:58:35.845182 master-0 kubenswrapper[4072]: I0223 12:58:35.845000 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:35.863537 master-0 kubenswrapper[4072]: E0223 12:58:35.863461 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Feb 23 12:58:36.125264 master-0 kubenswrapper[4072]: I0223 12:58:36.125073 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:36.126401 master-0 kubenswrapper[4072]: I0223 12:58:36.126353 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:36.126484 master-0 kubenswrapper[4072]: I0223 12:58:36.126410 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:36.126484 master-0 kubenswrapper[4072]: I0223 12:58:36.126422 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:36.126612 master-0 kubenswrapper[4072]: I0223 12:58:36.126496 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:58:36.127783 master-0 kubenswrapper[4072]: E0223 12:58:36.127737 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:58:36.566600 master-0 kubenswrapper[4072]: W0223 12:58:36.566428 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:36.566600 master-0 kubenswrapper[4072]: E0223 12:58:36.566519 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:36.734033 master-0 kubenswrapper[4072]: W0223 12:58:36.733881 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:36.734033 master-0 kubenswrapper[4072]: E0223 12:58:36.733990 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:36.845607 master-0 kubenswrapper[4072]: I0223 12:58:36.845402 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:37.031125 master-0 kubenswrapper[4072]: W0223 12:58:37.030971 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:37.031505 master-0 kubenswrapper[4072]: E0223 12:58:37.031151 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:37.121337 master-0 kubenswrapper[4072]: W0223 12:58:37.121142 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:37.121582 master-0 kubenswrapper[4072]: E0223 12:58:37.121357 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:37.845612 master-0 kubenswrapper[4072]: I0223 12:58:37.845470 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:38.797305 master-0 kubenswrapper[4072]: I0223 12:58:38.797206 4072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 12:58:38.799011 master-0 kubenswrapper[4072]: E0223 12:58:38.798951 4072 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:38.844512 master-0 kubenswrapper[4072]: I0223 12:58:38.844475 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:39.065285 master-0 kubenswrapper[4072]: E0223 12:58:39.065061 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Feb 23 12:58:39.329032 master-0 kubenswrapper[4072]: I0223 12:58:39.328856 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:39.330420 master-0 kubenswrapper[4072]: I0223 12:58:39.330375 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:39.330474 master-0 kubenswrapper[4072]: I0223 12:58:39.330448 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:39.330474 master-0 kubenswrapper[4072]: I0223 12:58:39.330459 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:39.330560 master-0 kubenswrapper[4072]: I0223 12:58:39.330533 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 12:58:39.331821 master-0 kubenswrapper[4072]: E0223 12:58:39.331701 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 23 12:58:39.845649 master-0 kubenswrapper[4072]: I0223 12:58:39.845566 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:40.238449 master-0 kubenswrapper[4072]: W0223 12:58:40.238376 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:40.238818 master-0 kubenswrapper[4072]: E0223 12:58:40.238469 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:40.318192 master-0 kubenswrapper[4072]: W0223 12:58:40.318138 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:40.318286 master-0 kubenswrapper[4072]: E0223 12:58:40.318216 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:40.845633 master-0 kubenswrapper[4072]: I0223 12:58:40.845569 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:41.060802 master-0 kubenswrapper[4072]: I0223 12:58:41.060754 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b2243c1b0e1a884637ce32ff21a340a8fd2d151e689c0ac21c3f49c0279d57f8"}
Feb 23 12:58:41.060802 master-0 kubenswrapper[4072]: I0223 12:58:41.060806 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b58d0f68f1bce11a0ca3232dc9f5a8f1bbd2f9babb595ae60e80f32714fa923e"}
Feb 23 12:58:41.060802 master-0 kubenswrapper[4072]: I0223 12:58:41.060782 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:41.062697 master-0 kubenswrapper[4072]: I0223 12:58:41.062647 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:41.062762 master-0 kubenswrapper[4072]: I0223 12:58:41.062715 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:41.062762 master-0 kubenswrapper[4072]: I0223 12:58:41.062735 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:41.064289 master-0 kubenswrapper[4072]: I0223 12:58:41.064234 4072 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="9b2e0681668d9a8b51eaa2c8d5041d6128575d63543d355f03fa756ab6c575b2" exitCode=0
Feb 23 12:58:41.064397 master-0 kubenswrapper[4072]: I0223 12:58:41.064367 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"9b2e0681668d9a8b51eaa2c8d5041d6128575d63543d355f03fa756ab6c575b2"}
Feb 23 12:58:41.064441 master-0 kubenswrapper[4072]: I0223 12:58:41.064398 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:41.065606 master-0 kubenswrapper[4072]: I0223 12:58:41.065581 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:41.065672 master-0 kubenswrapper[4072]: I0223 12:58:41.065635 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:41.065672 master-0 kubenswrapper[4072]: I0223 12:58:41.065657 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:41.339658 master-0 kubenswrapper[4072]: W0223 12:58:41.339612 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:41.340035 master-0 kubenswrapper[4072]: E0223 12:58:41.339690 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:41.527941 master-0 kubenswrapper[4072]: W0223 12:58:41.527878 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:41.527941 master-0 kubenswrapper[4072]: E0223 12:58:41.527933 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 12:58:41.845489 master-0 kubenswrapper[4072]: I0223 12:58:41.845386 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:42.068414 master-0 kubenswrapper[4072]: I0223 12:58:42.068305 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log"
Feb 23 12:58:42.068974 master-0 kubenswrapper[4072]: I0223 12:58:42.068911 4072 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="d0d31020195198fe76c1d68fe1110293d627cb57df5479db81725c577f4e8eb0" exitCode=1
Feb 23 12:58:42.069056 master-0 kubenswrapper[4072]: I0223 12:58:42.069027 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:42.069056 master-0 kubenswrapper[4072]: I0223 12:58:42.069046 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:42.069056 master-0 kubenswrapper[4072]: I0223 12:58:42.069039 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"d0d31020195198fe76c1d68fe1110293d627cb57df5479db81725c577f4e8eb0"}
Feb 23 12:58:42.070078 master-0 kubenswrapper[4072]: I0223 12:58:42.070041 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:42.070164 master-0 kubenswrapper[4072]: I0223 12:58:42.070093 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:42.070164 master-0 kubenswrapper[4072]: I0223 12:58:42.070112 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:42.070573 master-0 kubenswrapper[4072]: I0223 12:58:42.070540 4072 scope.go:117] "RemoveContainer" containerID="d0d31020195198fe76c1d68fe1110293d627cb57df5479db81725c577f4e8eb0"
Feb 23 12:58:42.071150 master-0 kubenswrapper[4072]: I0223 12:58:42.071052 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:42.071150 master-0 kubenswrapper[4072]: I0223 12:58:42.071080 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:42.071150 master-0 kubenswrapper[4072]: I0223 12:58:42.071096 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:42.845100 master-0 kubenswrapper[4072]: I0223 12:58:42.845026 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:43.006532 master-0 kubenswrapper[4072]: E0223 12:58:43.006484 4072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:58:43.225470 master-0 kubenswrapper[4072]: E0223 12:58:43.225235 4072 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1896e1903197c319 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.840438553 +0000 UTC m=+0.650595205,LastTimestamp:2026-02-23 12:58:32.840438553 +0000 UTC m=+0.650595205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:43.846998 master-0 kubenswrapper[4072]: I0223 12:58:43.846834 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 23 12:58:44.077386 master-0 kubenswrapper[4072]: I0223 12:58:44.077126 4072 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="128581ddbe7657ebd83ea9ba25a542fc8f1d7245b7d7a38fdcce26195377d53b" exitCode=0
Feb 23 12:58:44.077386 master-0 kubenswrapper[4072]: I0223 12:58:44.077274 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:44.078087 master-0 kubenswrapper[4072]: I0223 12:58:44.078017 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"128581ddbe7657ebd83ea9ba25a542fc8f1d7245b7d7a38fdcce26195377d53b"}
Feb 23 12:58:44.078775 master-0 kubenswrapper[4072]: I0223 12:58:44.078744 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:44.078994 master-0 kubenswrapper[4072]: I0223 12:58:44.078949 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:44.079156 master-0 kubenswrapper[4072]: I0223 12:58:44.079121 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:44.080231 master-0 kubenswrapper[4072]: I0223 12:58:44.080164 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log"
Feb 23 12:58:44.081186 master-0 kubenswrapper[4072]: I0223 12:58:44.081132 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log"
Feb 23 12:58:44.082359 master-0 kubenswrapper[4072]: I0223 12:58:44.082090 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:58:44.082359 master-0 kubenswrapper[4072]: I0223 12:58:44.082118 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"1033a6063dcb61725480b2412d7de9e9458d159a0be8f602a59590661b5eca1c"}
Feb 23 12:58:44.082359 master-0 kubenswrapper[4072]: I0223 12:58:44.082193 4072 scope.go:117] "RemoveContainer" containerID="d0d31020195198fe76c1d68fe1110293d627cb57df5479db81725c577f4e8eb0"
Feb 23 12:58:44.082614 master-0 kubenswrapper[4072]: I0223 12:58:44.082044 4072 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="1033a6063dcb61725480b2412d7de9e9458d159a0be8f602a59590661b5eca1c" exitCode=1
Feb 23 12:58:44.083221 master-0 kubenswrapper[4072]: I0223 12:58:44.083183 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:58:44.083330 master-0 kubenswrapper[4072]: I0223 12:58:44.083229 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:58:44.083330 master-0 kubenswrapper[4072]: I0223 12:58:44.083268 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:58:44.083773 master-0 kubenswrapper[4072]: I0223 12:58:44.083744 4072 scope.go:117] "RemoveContainer" containerID="1033a6063dcb61725480b2412d7de9e9458d159a0be8f602a59590661b5eca1c"
Feb 23 12:58:44.084070 master-0 kubenswrapper[4072]: E0223 12:58:44.084017 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 23 12:58:44.093614 master-0 kubenswrapper[4072]: I0223 12:58:44.093560 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"177a00edcfd919e7d221798cd7875143318357f73a98d1f96f1e3d8cf020354d"}
Feb 23 12:58:44.093716 master-0 kubenswrapper[4072]: I0223 12:58:44.093669 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb
23 12:58:44.094814 master-0 kubenswrapper[4072]: I0223 12:58:44.094778 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:44.094814 master-0 kubenswrapper[4072]: I0223 12:58:44.094817 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:44.094980 master-0 kubenswrapper[4072]: I0223 12:58:44.094834 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:44.099411 master-0 kubenswrapper[4072]: I0223 12:58:44.099371 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"7d5bdcbce5e54abee67f20bf954b2be91c6e48fe8d182f1c276415bde1e373db"} Feb 23 12:58:44.103059 master-0 kubenswrapper[4072]: I0223 12:58:44.102984 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:44.104233 master-0 kubenswrapper[4072]: I0223 12:58:44.104200 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:44.104357 master-0 kubenswrapper[4072]: I0223 12:58:44.104273 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:44.104357 master-0 kubenswrapper[4072]: I0223 12:58:44.104291 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:45.132756 master-0 kubenswrapper[4072]: I0223 12:58:45.132653 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 23 12:58:45.133655 master-0 kubenswrapper[4072]: I0223 12:58:45.133449 4072 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:45.134472 master-0 kubenswrapper[4072]: I0223 12:58:45.134437 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:45.134556 master-0 kubenswrapper[4072]: I0223 12:58:45.134485 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:45.134556 master-0 kubenswrapper[4072]: I0223 12:58:45.134497 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:45.134888 master-0 kubenswrapper[4072]: I0223 12:58:45.134868 4072 scope.go:117] "RemoveContainer" containerID="1033a6063dcb61725480b2412d7de9e9458d159a0be8f602a59590661b5eca1c" Feb 23 12:58:45.135085 master-0 kubenswrapper[4072]: E0223 12:58:45.135059 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 23 12:58:45.137855 master-0 kubenswrapper[4072]: I0223 12:58:45.137807 4072 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="7d5bdcbce5e54abee67f20bf954b2be91c6e48fe8d182f1c276415bde1e373db" exitCode=1 Feb 23 12:58:45.137936 master-0 kubenswrapper[4072]: I0223 12:58:45.137877 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"7d5bdcbce5e54abee67f20bf954b2be91c6e48fe8d182f1c276415bde1e373db"} Feb 23 12:58:45.140406 master-0 
kubenswrapper[4072]: I0223 12:58:45.140384 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:45.140500 master-0 kubenswrapper[4072]: I0223 12:58:45.140389 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"7e9526f21d0004f4be338f194dd1d8ef03df5393e98a9f29994fc1a1aea54d33"} Feb 23 12:58:45.141227 master-0 kubenswrapper[4072]: I0223 12:58:45.141210 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:45.141305 master-0 kubenswrapper[4072]: I0223 12:58:45.141292 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:45.141305 master-0 kubenswrapper[4072]: I0223 12:58:45.141304 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:45.732970 master-0 kubenswrapper[4072]: I0223 12:58:45.732896 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:45.735840 master-0 kubenswrapper[4072]: I0223 12:58:45.735797 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:45.735913 master-0 kubenswrapper[4072]: I0223 12:58:45.735863 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:45.735913 master-0 kubenswrapper[4072]: I0223 12:58:45.735878 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:45.736029 master-0 kubenswrapper[4072]: I0223 12:58:45.736004 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 12:58:45.910362 master-0 
kubenswrapper[4072]: E0223 12:58:45.910209 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 23 12:58:45.910362 master-0 kubenswrapper[4072]: I0223 12:58:45.910255 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:45.910362 master-0 kubenswrapper[4072]: E0223 12:58:45.910338 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 23 12:58:46.850520 master-0 kubenswrapper[4072]: I0223 12:58:46.850458 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:47.123489 master-0 kubenswrapper[4072]: I0223 12:58:47.123320 4072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 12:58:47.152841 master-0 kubenswrapper[4072]: I0223 12:58:47.152789 4072 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 12:58:47.851924 master-0 kubenswrapper[4072]: I0223 12:58:47.851809 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:48.160877 master-0 kubenswrapper[4072]: 
I0223 12:58:48.160698 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"321eaf326ad8a489a13ada6c53cf34c2c99e6344cfe3f0727f5eef006f9c7e8e"} Feb 23 12:58:48.160877 master-0 kubenswrapper[4072]: I0223 12:58:48.160792 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:48.162511 master-0 kubenswrapper[4072]: I0223 12:58:48.162466 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:48.162633 master-0 kubenswrapper[4072]: I0223 12:58:48.162525 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:48.162633 master-0 kubenswrapper[4072]: I0223 12:58:48.162544 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:48.163060 master-0 kubenswrapper[4072]: I0223 12:58:48.163020 4072 scope.go:117] "RemoveContainer" containerID="7d5bdcbce5e54abee67f20bf954b2be91c6e48fe8d182f1c276415bde1e373db" Feb 23 12:58:48.164416 master-0 kubenswrapper[4072]: I0223 12:58:48.164334 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"6f08e1116d82edc6d1a5a54978dd03f762876e6846750a14b497bad3e1b62afe"} Feb 23 12:58:48.164620 master-0 kubenswrapper[4072]: I0223 12:58:48.164566 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:48.165857 master-0 kubenswrapper[4072]: I0223 12:58:48.165806 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:48.165857 master-0 kubenswrapper[4072]: I0223 
12:58:48.165859 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:48.166063 master-0 kubenswrapper[4072]: I0223 12:58:48.165877 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:48.852395 master-0 kubenswrapper[4072]: I0223 12:58:48.852305 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:49.079861 master-0 kubenswrapper[4072]: I0223 12:58:49.079781 4072 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:49.091382 master-0 kubenswrapper[4072]: W0223 12:58:49.091316 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 23 12:58:49.091518 master-0 kubenswrapper[4072]: E0223 12:58:49.091398 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 23 12:58:49.171927 master-0 kubenswrapper[4072]: I0223 12:58:49.171737 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"d3e83b689409ffab35b6bf3a0343f41dbacbec334285a8d86cf53a0625ccbea7"} Feb 23 12:58:49.171927 master-0 kubenswrapper[4072]: I0223 12:58:49.171806 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 23 12:58:49.172917 master-0 kubenswrapper[4072]: I0223 12:58:49.171809 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:49.173335 master-0 kubenswrapper[4072]: I0223 12:58:49.173281 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:49.173690 master-0 kubenswrapper[4072]: I0223 12:58:49.173423 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:49.173822 master-0 kubenswrapper[4072]: I0223 12:58:49.173798 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:49.173937 master-0 kubenswrapper[4072]: I0223 12:58:49.173917 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:49.174113 master-0 kubenswrapper[4072]: I0223 12:58:49.173887 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:49.174113 master-0 kubenswrapper[4072]: I0223 12:58:49.174111 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:49.774040 master-0 kubenswrapper[4072]: I0223 12:58:49.773982 4072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:49.779952 master-0 kubenswrapper[4072]: I0223 12:58:49.779913 4072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:49.853527 master-0 kubenswrapper[4072]: I0223 12:58:49.853455 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:50.174877 master-0 kubenswrapper[4072]: I0223 12:58:50.174721 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:50.175786 master-0 kubenswrapper[4072]: I0223 12:58:50.174747 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:50.175953 master-0 kubenswrapper[4072]: I0223 12:58:50.174941 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:50.177768 master-0 kubenswrapper[4072]: I0223 12:58:50.177720 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:50.177910 master-0 kubenswrapper[4072]: I0223 12:58:50.177732 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:50.177910 master-0 kubenswrapper[4072]: I0223 12:58:50.177795 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:50.177910 master-0 kubenswrapper[4072]: I0223 12:58:50.177820 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:50.177910 master-0 kubenswrapper[4072]: I0223 12:58:50.177832 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:50.177910 master-0 kubenswrapper[4072]: I0223 12:58:50.177860 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:50.185824 master-0 kubenswrapper[4072]: I0223 12:58:50.185764 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 12:58:50.204971 master-0 kubenswrapper[4072]: W0223 12:58:50.204890 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 23 12:58:50.205100 master-0 kubenswrapper[4072]: E0223 12:58:50.204973 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 23 12:58:50.695840 master-0 kubenswrapper[4072]: W0223 12:58:50.695704 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:50.695840 master-0 kubenswrapper[4072]: E0223 12:58:50.695805 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 23 12:58:50.852671 master-0 kubenswrapper[4072]: I0223 12:58:50.852528 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:51.177709 master-0 kubenswrapper[4072]: I0223 12:58:51.177532 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 23 12:58:51.178820 master-0 kubenswrapper[4072]: I0223 12:58:51.178775 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:51.178891 master-0 kubenswrapper[4072]: I0223 12:58:51.178841 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:51.178891 master-0 kubenswrapper[4072]: I0223 12:58:51.178863 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:51.853205 master-0 kubenswrapper[4072]: I0223 12:58:51.852436 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:52.180613 master-0 kubenswrapper[4072]: I0223 12:58:52.180381 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:52.181901 master-0 kubenswrapper[4072]: I0223 12:58:52.181803 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:52.181901 master-0 kubenswrapper[4072]: I0223 12:58:52.181898 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:52.182058 master-0 kubenswrapper[4072]: I0223 12:58:52.181918 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:52.852837 master-0 kubenswrapper[4072]: I0223 12:58:52.852734 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 
12:58:52.911740 master-0 kubenswrapper[4072]: I0223 12:58:52.911549 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:52.914229 master-0 kubenswrapper[4072]: I0223 12:58:52.913964 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:52.914229 master-0 kubenswrapper[4072]: I0223 12:58:52.914088 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:52.914229 master-0 kubenswrapper[4072]: I0223 12:58:52.914114 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:52.914229 master-0 kubenswrapper[4072]: I0223 12:58:52.914232 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 12:58:52.919234 master-0 kubenswrapper[4072]: E0223 12:58:52.919193 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 23 12:58:52.920346 master-0 kubenswrapper[4072]: E0223 12:58:52.920278 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 23 12:58:53.006740 master-0 kubenswrapper[4072]: E0223 12:58:53.006677 4072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 23 12:58:53.188304 master-0 kubenswrapper[4072]: W0223 12:58:53.188274 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group 
"" at the cluster scope Feb 23 12:58:53.188906 master-0 kubenswrapper[4072]: E0223 12:58:53.188882 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 23 12:58:53.233442 master-0 kubenswrapper[4072]: E0223 12:58:53.233230 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1903197c319 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.840438553 +0000 UTC m=+0.650595205,LastTimestamp:2026-02-23 12:58:32.840438553 +0000 UTC m=+0.650595205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.240845 master-0 kubenswrapper[4072]: E0223 12:58:53.240748 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ea29d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC 
m=+0.723104377,LastTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC m=+0.723104377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.248398 master-0 kubenswrapper[4072]: E0223 12:58:53.248174 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035eb62f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,LastTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.255791 master-0 kubenswrapper[4072]: E0223 12:58:53.255654 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ec5b2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913091372 +0000 UTC m=+0.723248024,LastTimestamp:2026-02-23 12:58:32.913091372 +0000 UTC m=+0.723248024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.268707 master-0 kubenswrapper[4072]: E0223 12:58:53.268428 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e1903b961bd2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:33.008102354 +0000 UTC m=+0.818259006,LastTimestamp:2026-02-23 12:58:33.008102354 +0000 UTC m=+0.818259006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.279052 master-0 kubenswrapper[4072]: E0223 12:58:53.278835 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ea29d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ea29d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC m=+0.723104377,LastTimestamp:2026-02-23 12:58:33.106597518 +0000 UTC m=+0.916754170,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 
Feb 23 12:58:53.285979 master-0 kubenswrapper[4072]: E0223 12:58:53.285176 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035eb62f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035eb62f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,LastTimestamp:2026-02-23 12:58:33.106621569 +0000 UTC m=+0.916778211,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.292987 master-0 kubenswrapper[4072]: E0223 12:58:53.292815 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ec5b2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ec5b2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913091372 +0000 UTC m=+0.723248024,LastTimestamp:2026-02-23 12:58:33.10663735 +0000 UTC m=+0.916794002,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.300032 master-0 kubenswrapper[4072]: E0223 12:58:53.299846 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ea29d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ea29d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC m=+0.723104377,LastTimestamp:2026-02-23 12:58:33.130298391 +0000 UTC m=+0.940455033,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.307179 master-0 kubenswrapper[4072]: E0223 12:58:53.307048 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035eb62f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035eb62f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,LastTimestamp:2026-02-23 12:58:33.130327765 +0000 UTC m=+0.940484407,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.314101 master-0 kubenswrapper[4072]: E0223 12:58:53.313931 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ec5b2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ec5b2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913091372 +0000 UTC m=+0.723248024,LastTimestamp:2026-02-23 12:58:33.130345068 +0000 UTC m=+0.940501720,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.321460 master-0 kubenswrapper[4072]: E0223 12:58:53.321288 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ea29d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ea29d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC m=+0.723104377,LastTimestamp:2026-02-23 12:58:33.131724514 +0000 UTC m=+0.941881166,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.328357 master-0 kubenswrapper[4072]: E0223 12:58:53.328191 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035eb62f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035eb62f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,LastTimestamp:2026-02-23 12:58:33.131747938 +0000 UTC m=+0.941904580,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.335008 master-0 kubenswrapper[4072]: E0223 12:58:53.334853 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ec5b2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ec5b2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913091372 +0000 UTC m=+0.723248024,LastTimestamp:2026-02-23 12:58:33.13176452 +0000 UTC m=+0.941921172,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.342282 master-0 kubenswrapper[4072]: E0223 12:58:53.341718 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ea29d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ea29d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC m=+0.723104377,LastTimestamp:2026-02-23 12:58:33.13223018 +0000 UTC m=+0.942386832,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.348594 master-0 kubenswrapper[4072]: E0223 12:58:53.348437 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035eb62f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035eb62f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,LastTimestamp:2026-02-23 12:58:33.13229482 +0000 UTC m=+0.942451472,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.360699 master-0 kubenswrapper[4072]: E0223 12:58:53.360538 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ec5b2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ec5b2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913091372 +0000 UTC m=+0.723248024,LastTimestamp:2026-02-23 12:58:33.132313033 +0000 UTC m=+0.942469675,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.370311 master-0 kubenswrapper[4072]: E0223 12:58:53.369465 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ea29d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ea29d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC m=+0.723104377,LastTimestamp:2026-02-23 12:58:33.133318403 +0000 UTC m=+0.943475045,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.378767 master-0 kubenswrapper[4072]: E0223 12:58:53.378589 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035eb62f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035eb62f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,LastTimestamp:2026-02-23 12:58:33.133345627 +0000 UTC m=+0.943502279,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.386828 master-0 kubenswrapper[4072]: E0223 12:58:53.386587 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ec5b2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ec5b2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913091372 +0000 UTC m=+0.723248024,LastTimestamp:2026-02-23 12:58:33.13336372 +0000 UTC m=+0.943520372,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.394504 master-0 kubenswrapper[4072]: E0223 12:58:53.394213 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ea29d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ea29d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC m=+0.723104377,LastTimestamp:2026-02-23 12:58:33.133440272 +0000 UTC m=+0.943596914,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.402645 master-0 kubenswrapper[4072]: E0223 12:58:53.402490 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035eb62f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035eb62f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,LastTimestamp:2026-02-23 12:58:33.133465075 +0000 UTC m=+0.943621717,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.409524 master-0 kubenswrapper[4072]: E0223 12:58:53.409382 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ec5b2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ec5b2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913091372 +0000 UTC m=+0.723248024,LastTimestamp:2026-02-23 12:58:33.133480057 +0000 UTC m=+0.943636709,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.416971 master-0 kubenswrapper[4072]: E0223 12:58:53.416812 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035ea29d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035ea29d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.912947665 +0000 UTC m=+0.723104377,LastTimestamp:2026-02-23 12:58:33.134804946 +0000 UTC m=+0.944961598,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.424290 master-0 kubenswrapper[4072]: E0223 12:58:53.424102 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1896e19035eb62f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1896e19035eb62f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:32.913027829 +0000 UTC m=+0.723184491,LastTimestamp:2026-02-23 12:58:33.134865815 +0000 UTC m=+0.945022467,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.432862 master-0 kubenswrapper[4072]: E0223 12:58:53.432659 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1896e1907ec750e4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:34.135400676 +0000 UTC m=+1.945557328,LastTimestamp:2026-02-23 12:58:34.135400676 +0000 UTC m=+1.945557328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.440055 master-0 kubenswrapper[4072]: E0223 12:58:53.439760 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1907ec80c2f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:34.135448623 +0000 UTC m=+1.945605265,LastTimestamp:2026-02-23 12:58:34.135448623 +0000 UTC m=+1.945605265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.447261 master-0 kubenswrapper[4072]: E0223 12:58:53.447015 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e19080c7b5ac openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:34.168980908 +0000 UTC m=+1.979137550,LastTimestamp:2026-02-23 12:58:34.168980908 +0000 UTC m=+1.979137550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.455071 master-0 kubenswrapper[4072]: E0223 12:58:53.454875 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e19083b2aecf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:34.217934543 +0000 UTC m=+2.028091195,LastTimestamp:2026-02-23 12:58:34.217934543 +0000 UTC m=+2.028091195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.462627 master-0 kubenswrapper[4072]: E0223 12:58:53.462390 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e19086e276ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:34.271397614 +0000 UTC m=+2.081554256,LastTimestamp:2026-02-23 12:58:34.271397614 +0000 UTC m=+2.081554256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.471126 master-0 kubenswrapper[4072]: E0223 12:58:53.470932 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e191e45b0e31 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" in 5.965s (5.965s including waiting).
Image size: 464984427 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.134549041 +0000 UTC m=+7.944705653,LastTimestamp:2026-02-23 12:58:40.134549041 +0000 UTC m=+7.944705653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.478208 master-0 kubenswrapper[4072]: E0223 12:58:53.477966 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e191e5330684 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" in 5.93s (5.93s including waiting).
Image size: 529218694 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.148702852 +0000 UTC m=+7.958859454,LastTimestamp:2026-02-23 12:58:40.148702852 +0000 UTC m=+7.958859454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.485577 master-0 kubenswrapper[4072]: E0223 12:58:53.485397 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e191f16d12cb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.353833675 +0000 UTC m=+8.163990287,LastTimestamp:2026-02-23 12:58:40.353833675 +0000 UTC m=+8.163990287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.492797 master-0 kubenswrapper[4072]: E0223 12:58:53.492599 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e191f1a1e26a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.357294698 +0000 UTC m=+8.167451310,LastTimestamp:2026-02-23 12:58:40.357294698 +0000 UTC m=+8.167451310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.499692 master-0 kubenswrapper[4072]: E0223 12:58:53.499561 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e191f26ee46d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.370730093 +0000 UTC m=+8.180886705,LastTimestamp:2026-02-23 12:58:40.370730093 +0000 UTC m=+8.180886705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.507072 master-0 kubenswrapper[4072]: E0223 12:58:53.506893 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e191f2cad245 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.376754757 +0000 UTC m=+8.186911359,LastTimestamp:2026-02-23 12:58:40.376754757 +0000 UTC m=+8.186911359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.514548 master-0 kubenswrapper[4072]: E0223 12:58:53.514376 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e191f33a55d7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.384062935 +0000 UTC m=+8.194219547,LastTimestamp:2026-02-23 12:58:40.384062935 +0000 UTC m=+8.194219547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.521313 master-0 kubenswrapper[4072]: E0223 12:58:53.521125 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e192019ad35a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.625267546 +0000 UTC m=+8.435424158,LastTimestamp:2026-02-23 12:58:40.625267546 +0000 UTC m=+8.435424158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.527954 master-0 kubenswrapper[4072]: E0223 12:58:53.527796 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e192025d504d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:40.638013517 +0000 UTC m=+8.448170139,LastTimestamp:2026-02-23 12:58:40.638013517 +0000 UTC m=+8.448170139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.541704 master-0 kubenswrapper[4072]: E0223 12:58:53.541014 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1921c0d9abd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.068997309 +0000 UTC m=+8.879153961,LastTimestamp:2026-02-23 12:58:41.068997309 +0000 UTC m=+8.879153961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.548758 master-0 kubenswrapper[4072]: E0223 12:58:53.548608 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1922a56a947 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.308666183 +0000 UTC m=+9.118822805,LastTimestamp:2026-02-23 12:58:41.308666183 +0000 UTC m=+9.118822805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.562652 master-0 kubenswrapper[4072]: E0223 12:58:53.562377 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1922b4909f7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.324550647 +0000 UTC m=+9.134707259,LastTimestamp:2026-02-23 12:58:41.324550647 +0000 UTC m=+9.134707259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.571531 master-0 kubenswrapper[4072]: E0223 12:58:53.571342 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e1921c0d9abd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1921c0d9abd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.068997309 +0000 UTC m=+8.879153961,LastTimestamp:2026-02-23 12:58:43.467390847 +0000 UTC m=+11.277547459,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.583460 master-0 kubenswrapper[4072]: E0223 12:58:53.583267 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1896e192b1b18c47 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.444s (9.444s including waiting).
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.579546695 +0000 UTC m=+11.389703317,LastTimestamp:2026-02-23 12:58:43.579546695 +0000 UTC m=+11.389703317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 12:58:53.589980 master-0 kubenswrapper[4072]: E0223 12:58:53.589815 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e192b3c5a33f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.478s (9.478s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.614417727 +0000 UTC m=+11.424574349,LastTimestamp:2026-02-23 12:58:43.614417727 +0000 UTC m=+11.424574349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.594897 master-0 kubenswrapper[4072]: E0223 12:58:53.594778 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e192b5a4e713 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.374s (9.374s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.645826835 +0000 UTC m=+11.455983467,LastTimestamp:2026-02-23 12:58:43.645826835 +0000 UTC m=+11.455983467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.599648 master-0 kubenswrapper[4072]: E0223 12:58:53.599562 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e1922a56a947\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1922a56a947 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.308666183 +0000 UTC m=+9.118822805,LastTimestamp:2026-02-23 12:58:43.788040858 +0000 UTC m=+11.598197480,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.605505 master-0 kubenswrapper[4072]: E0223 12:58:53.605409 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e1922b4909f7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1922b4909f7 openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.324550647 +0000 UTC m=+9.134707259,LastTimestamp:2026-02-23 12:58:43.805189411 +0000 UTC m=+11.615346033,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.609706 master-0 kubenswrapper[4072]: E0223 12:58:53.609619 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1896e192bfc9fcd9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.816029401 +0000 UTC m=+11.626186023,LastTimestamp:2026-02-23 12:58:43.816029401 +0000 UTC m=+11.626186023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.614376 master-0 kubenswrapper[4072]: E0223 12:58:53.614208 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1896e192c09d51cb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.829879243 +0000 UTC m=+11.640035865,LastTimestamp:2026-02-23 12:58:43.829879243 +0000 UTC m=+11.640035865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.620396 master-0 kubenswrapper[4072]: E0223 12:58:53.620225 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e192c3a43bfd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.880664061 +0000 UTC m=+11.690820703,LastTimestamp:2026-02-23 12:58:43.880664061 +0000 UTC m=+11.690820703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.624457 master-0 kubenswrapper[4072]: E0223 12:58:53.624328 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e192c46ae2b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.893682868 +0000 UTC m=+11.703839520,LastTimestamp:2026-02-23 12:58:43.893682868 +0000 UTC m=+11.703839520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.630649 master-0 kubenswrapper[4072]: E0223 12:58:53.630572 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e192c473b729 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.894261545 +0000 UTC m=+11.704418207,LastTimestamp:2026-02-23 12:58:43.894261545 +0000 UTC m=+11.704418207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.636546 master-0 
kubenswrapper[4072]: E0223 12:58:53.636465 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e192c487e966 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.895585126 +0000 UTC m=+11.705741778,LastTimestamp:2026-02-23 12:58:43.895585126 +0000 UTC m=+11.705741778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.642020 master-0 kubenswrapper[4072]: E0223 12:58:53.641459 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e192c53cad2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.907431727 +0000 UTC m=+11.717588369,LastTimestamp:2026-02-23 
12:58:43.907431727 +0000 UTC m=+11.717588369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.648718 master-0 kubenswrapper[4072]: E0223 12:58:53.648526 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e192cfc21298 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:44.083946136 +0000 UTC m=+11.894102788,LastTimestamp:2026-02-23 12:58:44.083946136 +0000 UTC m=+11.894102788,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.656335 master-0 kubenswrapper[4072]: E0223 12:58:53.656057 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e192d0e29b8a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:44.102855562 +0000 UTC m=+11.913012204,LastTimestamp:2026-02-23 12:58:44.102855562 +0000 UTC m=+11.913012204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.664238 master-0 kubenswrapper[4072]: E0223 12:58:53.664020 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e192e0b88ad4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:44.368534228 +0000 UTC m=+12.178690880,LastTimestamp:2026-02-23 12:58:44.368534228 +0000 UTC m=+12.178690880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.669353 master-0 kubenswrapper[4072]: E0223 12:58:53.669202 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e192e17f1de5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:44.381548005 +0000 UTC m=+12.191704627,LastTimestamp:2026-02-23 12:58:44.381548005 +0000 UTC m=+12.191704627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.676271 master-0 kubenswrapper[4072]: E0223 12:58:53.676097 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e192e191f7a0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:44.382783392 +0000 UTC m=+12.192940014,LastTimestamp:2026-02-23 12:58:44.382783392 +0000 UTC m=+12.192940014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.683614 master-0 kubenswrapper[4072]: E0223 12:58:53.683449 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e192cfc21298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e192cfc21298 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:44.083946136 +0000 UTC m=+11.894102788,LastTimestamp:2026-02-23 12:58:45.135021996 +0000 UTC m=+12.945178618,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.691528 master-0 kubenswrapper[4072]: E0223 12:58:53.691209 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e193887bec0d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\" in 3.287s (3.287s including waiting). Image size: 505137106 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:47.183133709 +0000 UTC m=+14.993290331,LastTimestamp:2026-02-23 12:58:47.183133709 +0000 UTC m=+14.993290331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.698909 master-0 kubenswrapper[4072]: E0223 12:58:53.698759 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e1938b7682c2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" in 2.85s (2.85s including waiting). 
Image size: 514875199 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:47.233110722 +0000 UTC m=+15.043267344,LastTimestamp:2026-02-23 12:58:47.233110722 +0000 UTC m=+15.043267344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.706107 master-0 kubenswrapper[4072]: E0223 12:58:53.705937 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e1939607b0a1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:47.410397345 +0000 UTC m=+15.220553987,LastTimestamp:2026-02-23 12:58:47.410397345 +0000 UTC m=+15.220553987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.712658 master-0 kubenswrapper[4072]: E0223 12:58:53.712467 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e19396fecbaa kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:47.426591658 +0000 UTC m=+15.236748280,LastTimestamp:2026-02-23 12:58:47.426591658 +0000 UTC m=+15.236748280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.719367 master-0 kubenswrapper[4072]: E0223 12:58:53.719193 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e193970aeb6f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:47.427386223 +0000 UTC m=+15.237542845,LastTimestamp:2026-02-23 12:58:47.427386223 +0000 UTC m=+15.237542845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.726178 master-0 kubenswrapper[4072]: E0223 12:58:53.726041 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1896e19397ed4863 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:47.442221155 +0000 UTC m=+15.252377807,LastTimestamp:2026-02-23 12:58:47.442221155 +0000 UTC m=+15.252377807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.733606 master-0 kubenswrapper[4072]: E0223 12:58:53.733482 4072 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e193c35d29b7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:48.170973623 +0000 UTC m=+15.981130275,LastTimestamp:2026-02-23 12:58:48.170973623 +0000 UTC m=+15.981130275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.741035 master-0 kubenswrapper[4072]: E0223 12:58:53.740908 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1896e192c3a43bfd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e192c3a43bfd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.880664061 +0000 UTC m=+11.690820703,LastTimestamp:2026-02-23 12:58:48.408211702 +0000 UTC m=+16.218368364,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.748210 master-0 kubenswrapper[4072]: E0223 12:58:53.748011 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1896e192c473b729\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1896e192c473b729 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:43.894261545 +0000 UTC m=+11.704418207,LastTimestamp:2026-02-23 12:58:48.422223569 +0000 UTC m=+16.232380221,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:53.852231 master-0 kubenswrapper[4072]: I0223 12:58:53.852140 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:54.042622 master-0 kubenswrapper[4072]: I0223 12:58:54.042397 4072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:54.042884 master-0 kubenswrapper[4072]: I0223 12:58:54.042740 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:54.044743 master-0 kubenswrapper[4072]: I0223 12:58:54.044690 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:54.044743 master-0 kubenswrapper[4072]: I0223 12:58:54.044740 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:54.044895 master-0 kubenswrapper[4072]: I0223 12:58:54.044794 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:54.052137 master-0 kubenswrapper[4072]: I0223 12:58:54.052079 4072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:54.186153 master-0 kubenswrapper[4072]: I0223 12:58:54.186037 4072 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 23 12:58:54.186473 master-0 kubenswrapper[4072]: I0223 12:58:54.186160 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:54.187522 master-0 kubenswrapper[4072]: I0223 12:58:54.187452 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:54.187650 master-0 kubenswrapper[4072]: I0223 12:58:54.187532 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:54.187650 master-0 kubenswrapper[4072]: I0223 12:58:54.187555 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:54.851758 master-0 kubenswrapper[4072]: I0223 12:58:54.851648 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:54.859710 master-0 kubenswrapper[4072]: I0223 12:58:54.859653 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:55.188988 master-0 kubenswrapper[4072]: I0223 12:58:55.188916 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:55.190277 master-0 kubenswrapper[4072]: I0223 12:58:55.190195 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:55.190396 master-0 kubenswrapper[4072]: I0223 12:58:55.190292 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:55.190396 master-0 kubenswrapper[4072]: I0223 12:58:55.190315 4072 kubelet_node_status.go:724] 
"Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:55.852741 master-0 kubenswrapper[4072]: I0223 12:58:55.852656 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:55.926479 master-0 kubenswrapper[4072]: I0223 12:58:55.926333 4072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:55.933542 master-0 kubenswrapper[4072]: I0223 12:58:55.933480 4072 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:56.028887 master-0 kubenswrapper[4072]: I0223 12:58:56.028810 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:56.030429 master-0 kubenswrapper[4072]: I0223 12:58:56.030348 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:56.030533 master-0 kubenswrapper[4072]: I0223 12:58:56.030451 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:56.030533 master-0 kubenswrapper[4072]: I0223 12:58:56.030471 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:56.031157 master-0 kubenswrapper[4072]: I0223 12:58:56.031102 4072 scope.go:117] "RemoveContainer" containerID="1033a6063dcb61725480b2412d7de9e9458d159a0be8f602a59590661b5eca1c" Feb 23 12:58:56.043170 master-0 kubenswrapper[4072]: E0223 12:58:56.043064 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e1921c0d9abd\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1921c0d9abd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.068997309 +0000 UTC m=+8.879153961,LastTimestamp:2026-02-23 12:58:56.035456363 +0000 UTC m=+23.845612975,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:56.192667 master-0 kubenswrapper[4072]: I0223 12:58:56.192618 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:56.194090 master-0 kubenswrapper[4072]: I0223 12:58:56.194038 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:56.194220 master-0 kubenswrapper[4072]: I0223 12:58:56.194103 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:56.194220 master-0 kubenswrapper[4072]: I0223 12:58:56.194122 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:56.334059 master-0 kubenswrapper[4072]: E0223 12:58:56.333850 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e1922a56a947\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1922a56a947 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.308666183 +0000 UTC m=+9.118822805,LastTimestamp:2026-02-23 12:58:56.323274844 +0000 UTC m=+24.133431506,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:56.348077 master-0 kubenswrapper[4072]: E0223 12:58:56.347914 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e1922b4909f7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e1922b4909f7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:41.324550647 +0000 UTC m=+9.134707259,LastTimestamp:2026-02-23 12:58:56.340316114 +0000 UTC m=+24.150472766,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:56.851590 master-0 kubenswrapper[4072]: I0223 12:58:56.851508 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:57.209301 master-0 kubenswrapper[4072]: I0223 12:58:57.209203 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 23 12:58:57.210391 master-0 kubenswrapper[4072]: I0223 12:58:57.209906 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 23 12:58:57.210541 master-0 kubenswrapper[4072]: I0223 12:58:57.210484 4072 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd" exitCode=1 Feb 23 12:58:57.210675 master-0 kubenswrapper[4072]: I0223 12:58:57.210637 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:57.211675 master-0 kubenswrapper[4072]: I0223 12:58:57.211468 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd"} Feb 23 12:58:57.212313 master-0 kubenswrapper[4072]: I0223 12:58:57.212219 4072 scope.go:117] "RemoveContainer" containerID="1033a6063dcb61725480b2412d7de9e9458d159a0be8f602a59590661b5eca1c" Feb 23 12:58:57.212740 master-0 kubenswrapper[4072]: I0223 12:58:57.212232 4072 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:57.214453 master-0 kubenswrapper[4072]: I0223 12:58:57.214391 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:57.214549 master-0 kubenswrapper[4072]: I0223 12:58:57.214462 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:57.214549 master-0 kubenswrapper[4072]: I0223 12:58:57.214481 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:57.215065 master-0 kubenswrapper[4072]: I0223 12:58:57.215023 4072 scope.go:117] "RemoveContainer" containerID="6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd" Feb 23 12:58:57.216365 master-0 kubenswrapper[4072]: E0223 12:58:57.216308 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 23 12:58:57.218992 master-0 kubenswrapper[4072]: I0223 12:58:57.218032 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:57.218992 master-0 kubenswrapper[4072]: I0223 12:58:57.218063 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:57.218992 master-0 kubenswrapper[4072]: I0223 12:58:57.218094 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:57.221688 master-0 kubenswrapper[4072]: I0223 12:58:57.221636 
4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:57.225428 master-0 kubenswrapper[4072]: E0223 12:58:57.225206 4072 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1896e192cfc21298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1896e192cfc21298 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 12:58:44.083946136 +0000 UTC m=+11.894102788,LastTimestamp:2026-02-23 12:58:57.216196104 +0000 UTC m=+25.026352756,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 12:58:57.852160 master-0 kubenswrapper[4072]: I0223 12:58:57.852007 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:58.215703 master-0 kubenswrapper[4072]: I0223 12:58:58.215628 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 23 
12:58:58.216720 master-0 kubenswrapper[4072]: I0223 12:58:58.216674 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:58.217716 master-0 kubenswrapper[4072]: I0223 12:58:58.217661 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:58.217811 master-0 kubenswrapper[4072]: I0223 12:58:58.217725 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:58.217811 master-0 kubenswrapper[4072]: I0223 12:58:58.217744 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:58.224297 master-0 kubenswrapper[4072]: I0223 12:58:58.224231 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 12:58:58.852853 master-0 kubenswrapper[4072]: I0223 12:58:58.852632 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:59.109657 master-0 kubenswrapper[4072]: I0223 12:58:59.109552 4072 csr.go:261] certificate signing request csr-b2kqh is approved, waiting to be issued Feb 23 12:58:59.219509 master-0 kubenswrapper[4072]: I0223 12:58:59.219449 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:59.220658 master-0 kubenswrapper[4072]: I0223 12:58:59.220613 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:59.220742 master-0 kubenswrapper[4072]: I0223 12:58:59.220671 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 
12:58:59.220742 master-0 kubenswrapper[4072]: I0223 12:58:59.220690 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:59.852465 master-0 kubenswrapper[4072]: I0223 12:58:59.852410 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:58:59.920153 master-0 kubenswrapper[4072]: I0223 12:58:59.920059 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:58:59.922022 master-0 kubenswrapper[4072]: I0223 12:58:59.921949 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:58:59.922136 master-0 kubenswrapper[4072]: I0223 12:58:59.922036 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:58:59.922136 master-0 kubenswrapper[4072]: I0223 12:58:59.922063 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 12:58:59.922281 master-0 kubenswrapper[4072]: I0223 12:58:59.922160 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 12:58:59.931926 master-0 kubenswrapper[4072]: E0223 12:58:59.931861 4072 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 23 12:58:59.932401 master-0 kubenswrapper[4072]: E0223 12:58:59.932321 4072 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at 
the cluster scope" node="master-0" Feb 23 12:59:00.852471 master-0 kubenswrapper[4072]: I0223 12:59:00.852350 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:01.853022 master-0 kubenswrapper[4072]: I0223 12:59:01.852914 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:02.851924 master-0 kubenswrapper[4072]: I0223 12:59:02.851805 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:03.007696 master-0 kubenswrapper[4072]: E0223 12:59:03.007599 4072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 23 12:59:03.101352 master-0 kubenswrapper[4072]: W0223 12:59:03.101285 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 23 12:59:03.101592 master-0 kubenswrapper[4072]: E0223 12:59:03.101352 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 23 12:59:03.852761 master-0 kubenswrapper[4072]: 
I0223 12:59:03.852671 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:04.735409 master-0 kubenswrapper[4072]: W0223 12:59:04.735335 4072 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 23 12:59:04.735409 master-0 kubenswrapper[4072]: E0223 12:59:04.735411 4072 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 23 12:59:04.853133 master-0 kubenswrapper[4072]: I0223 12:59:04.853074 4072 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 12:59:04.941662 master-0 kubenswrapper[4072]: I0223 12:59:04.941582 4072 csr.go:257] certificate signing request csr-b2kqh is issued Feb 23 12:59:05.704699 master-0 kubenswrapper[4072]: I0223 12:59:05.704641 4072 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 23 12:59:05.864412 master-0 kubenswrapper[4072]: I0223 12:59:05.864328 4072 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 23 12:59:05.923168 master-0 kubenswrapper[4072]: I0223 12:59:05.923109 4072 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 23 12:59:05.943712 master-0 kubenswrapper[4072]: I0223 12:59:05.943629 4072 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 12:50:52 +0000 UTC, rotation deadline is 2026-02-24 09:02:29.971523042 +0000 UTC Feb 23 12:59:05.943712 master-0 kubenswrapper[4072]: I0223 12:59:05.943674 4072 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h3m24.02785413s for next certificate rotation Feb 23 12:59:06.060668 master-0 kubenswrapper[4072]: I0223 12:59:06.060538 4072 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 23 12:59:06.462138 master-0 kubenswrapper[4072]: I0223 12:59:06.461801 4072 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 23 12:59:06.462138 master-0 kubenswrapper[4072]: E0223 12:59:06.461869 4072 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 23 12:59:06.870481 master-0 kubenswrapper[4072]: I0223 12:59:06.870317 4072 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 23 12:59:06.927546 master-0 kubenswrapper[4072]: I0223 12:59:06.927478 4072 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 23 12:59:06.932829 master-0 kubenswrapper[4072]: I0223 12:59:06.932750 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 12:59:06.934465 master-0 kubenswrapper[4072]: I0223 12:59:06.934408 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 12:59:06.934465 master-0 kubenswrapper[4072]: I0223 12:59:06.934466 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 12:59:06.934722 master-0 kubenswrapper[4072]: I0223 12:59:06.934487 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Feb 23 12:59:06.934722 master-0 kubenswrapper[4072]: I0223 12:59:06.934586 4072 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 12:59:07.363117 master-0 kubenswrapper[4072]: I0223 12:59:07.363004 4072 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 23 12:59:07.363117 master-0 kubenswrapper[4072]: E0223 12:59:07.363108 4072 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Feb 23 12:59:07.713316 master-0 kubenswrapper[4072]: E0223 12:59:07.713236 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:07.814160 master-0 kubenswrapper[4072]: E0223 12:59:07.814093 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:07.915209 master-0 kubenswrapper[4072]: E0223 12:59:07.915139 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:07.980688 master-0 kubenswrapper[4072]: I0223 12:59:07.980529 4072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 23 12:59:07.994503 master-0 kubenswrapper[4072]: I0223 12:59:07.994431 4072 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 12:59:08.017157 master-0 kubenswrapper[4072]: E0223 12:59:08.017089 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:08.117860 master-0 kubenswrapper[4072]: E0223 12:59:08.117744 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:08.218482 master-0 kubenswrapper[4072]: E0223 12:59:08.218365 4072 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"master-0\" not found" Feb 23 12:59:08.319098 master-0 kubenswrapper[4072]: E0223 12:59:08.318863 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:08.420162 master-0 kubenswrapper[4072]: E0223 12:59:08.420074 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:08.520640 master-0 kubenswrapper[4072]: E0223 12:59:08.520549 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:08.621311 master-0 kubenswrapper[4072]: E0223 12:59:08.621142 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:08.721921 master-0 kubenswrapper[4072]: E0223 12:59:08.721847 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:08.822823 master-0 kubenswrapper[4072]: E0223 12:59:08.822763 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:08.923302 master-0 kubenswrapper[4072]: E0223 12:59:08.923121 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.024110 master-0 kubenswrapper[4072]: E0223 12:59:09.024042 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.124553 master-0 kubenswrapper[4072]: E0223 12:59:09.124471 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.225608 master-0 kubenswrapper[4072]: E0223 12:59:09.225489 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.326403 master-0 kubenswrapper[4072]: E0223 12:59:09.326293 4072 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.427578 master-0 kubenswrapper[4072]: E0223 12:59:09.427465 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.528778 master-0 kubenswrapper[4072]: E0223 12:59:09.528578 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.629062 master-0 kubenswrapper[4072]: E0223 12:59:09.628965 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.696563 master-0 kubenswrapper[4072]: I0223 12:59:09.696473 4072 csr.go:261] certificate signing request csr-gkjcc is approved, waiting to be issued Feb 23 12:59:09.708753 master-0 kubenswrapper[4072]: I0223 12:59:09.708602 4072 csr.go:257] certificate signing request csr-gkjcc is issued Feb 23 12:59:09.730003 master-0 kubenswrapper[4072]: E0223 12:59:09.729913 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.830309 master-0 kubenswrapper[4072]: E0223 12:59:09.830077 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:09.931117 master-0 kubenswrapper[4072]: E0223 12:59:09.930989 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:10.031680 master-0 kubenswrapper[4072]: E0223 12:59:10.031580 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:10.132947 master-0 kubenswrapper[4072]: E0223 12:59:10.132718 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:10.233896 master-0 kubenswrapper[4072]: E0223 12:59:10.233791 4072 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:10.334723 master-0 kubenswrapper[4072]: E0223 12:59:10.334603 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:10.435369 master-0 kubenswrapper[4072]: E0223 12:59:10.435147 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:10.536050 master-0 kubenswrapper[4072]: E0223 12:59:10.535949 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:10.637237 master-0 kubenswrapper[4072]: E0223 12:59:10.637154 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:10.711240 master-0 kubenswrapper[4072]: I0223 12:59:10.711160 4072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 12:50:52 +0000 UTC, rotation deadline is 2026-02-24 06:49:51.404393114 +0000 UTC
Feb 23 12:59:10.711240 master-0 kubenswrapper[4072]: I0223 12:59:10.711210 4072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h50m40.693188502s for next certificate rotation
Feb 23 12:59:10.737997 master-0 kubenswrapper[4072]: E0223 12:59:10.737900 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:10.838753 master-0 kubenswrapper[4072]: E0223 12:59:10.838637 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:10.939188 master-0 kubenswrapper[4072]: E0223 12:59:10.939088 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.039807 master-0 kubenswrapper[4072]: E0223 12:59:11.039562 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.139800 master-0 kubenswrapper[4072]: E0223 12:59:11.139690 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.240672 master-0 kubenswrapper[4072]: E0223 12:59:11.240561 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.341487 master-0 kubenswrapper[4072]: E0223 12:59:11.341229 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.442340 master-0 kubenswrapper[4072]: E0223 12:59:11.442211 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.542447 master-0 kubenswrapper[4072]: E0223 12:59:11.542374 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.643180 master-0 kubenswrapper[4072]: E0223 12:59:11.642975 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.711979 master-0 kubenswrapper[4072]: I0223 12:59:11.711924 4072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 12:50:52 +0000 UTC, rotation deadline is 2026-02-24 06:41:34.654589263 +0000 UTC
Feb 23 12:59:11.711979 master-0 kubenswrapper[4072]: I0223 12:59:11.711980 4072 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h42m22.942616958s for next certificate rotation
Feb 23 12:59:11.744007 master-0 kubenswrapper[4072]: E0223 12:59:11.743933 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.770189 master-0 kubenswrapper[4072]: I0223 12:59:11.770117 4072 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 12:59:11.807018 master-0 kubenswrapper[4072]: I0223 12:59:11.806422 4072 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 23 12:59:11.844838 master-0 kubenswrapper[4072]: E0223 12:59:11.844700 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:11.945587 master-0 kubenswrapper[4072]: E0223 12:59:11.945475 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.029229 master-0 kubenswrapper[4072]: I0223 12:59:12.029133 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:12.031192 master-0 kubenswrapper[4072]: I0223 12:59:12.031142 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:12.031192 master-0 kubenswrapper[4072]: I0223 12:59:12.031200 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:12.031192 master-0 kubenswrapper[4072]: I0223 12:59:12.031220 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:12.031814 master-0 kubenswrapper[4072]: I0223 12:59:12.031784 4072 scope.go:117] "RemoveContainer" containerID="6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd"
Feb 23 12:59:12.032076 master-0 kubenswrapper[4072]: E0223 12:59:12.032038 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 23 12:59:12.046120 master-0 kubenswrapper[4072]: E0223 12:59:12.046049 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.146743 master-0 kubenswrapper[4072]: E0223 12:59:12.146565 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.247124 master-0 kubenswrapper[4072]: E0223 12:59:12.246961 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.348177 master-0 kubenswrapper[4072]: E0223 12:59:12.348109 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.448907 master-0 kubenswrapper[4072]: E0223 12:59:12.448809 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.549997 master-0 kubenswrapper[4072]: E0223 12:59:12.549823 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.650172 master-0 kubenswrapper[4072]: E0223 12:59:12.650068 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.751091 master-0 kubenswrapper[4072]: E0223 12:59:12.750999 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.851948 master-0 kubenswrapper[4072]: E0223 12:59:12.851804 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:12.953235 master-0 kubenswrapper[4072]: E0223 12:59:12.953152 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.008754 master-0 kubenswrapper[4072]: E0223 12:59:13.008714 4072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:59:13.053976 master-0 kubenswrapper[4072]: E0223 12:59:13.053896 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.154809 master-0 kubenswrapper[4072]: E0223 12:59:13.154598 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.254866 master-0 kubenswrapper[4072]: E0223 12:59:13.254763 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.355771 master-0 kubenswrapper[4072]: E0223 12:59:13.355677 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.456810 master-0 kubenswrapper[4072]: E0223 12:59:13.456672 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.557751 master-0 kubenswrapper[4072]: E0223 12:59:13.557605 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.658428 master-0 kubenswrapper[4072]: E0223 12:59:13.658295 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.759337 master-0 kubenswrapper[4072]: E0223 12:59:13.759080 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.859529 master-0 kubenswrapper[4072]: E0223 12:59:13.859388 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:13.960705 master-0 kubenswrapper[4072]: E0223 12:59:13.960574 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.061833 master-0 kubenswrapper[4072]: E0223 12:59:14.061625 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.162888 master-0 kubenswrapper[4072]: E0223 12:59:14.162759 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.263312 master-0 kubenswrapper[4072]: E0223 12:59:14.263190 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.363684 master-0 kubenswrapper[4072]: E0223 12:59:14.363488 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.464652 master-0 kubenswrapper[4072]: E0223 12:59:14.464482 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.565696 master-0 kubenswrapper[4072]: E0223 12:59:14.565544 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.666031 master-0 kubenswrapper[4072]: E0223 12:59:14.665773 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.767006 master-0 kubenswrapper[4072]: E0223 12:59:14.766872 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.867793 master-0 kubenswrapper[4072]: E0223 12:59:14.867661 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:14.968723 master-0 kubenswrapper[4072]: E0223 12:59:14.968604 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.069094 master-0 kubenswrapper[4072]: E0223 12:59:15.068986 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.169118 master-0 kubenswrapper[4072]: E0223 12:59:15.169063 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.270082 master-0 kubenswrapper[4072]: E0223 12:59:15.269849 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.371192 master-0 kubenswrapper[4072]: E0223 12:59:15.371074 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.472110 master-0 kubenswrapper[4072]: E0223 12:59:15.472026 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.573084 master-0 kubenswrapper[4072]: E0223 12:59:15.572871 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.673712 master-0 kubenswrapper[4072]: E0223 12:59:15.673585 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.774817 master-0 kubenswrapper[4072]: E0223 12:59:15.774718 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.875343 master-0 kubenswrapper[4072]: E0223 12:59:15.875165 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:15.975508 master-0 kubenswrapper[4072]: E0223 12:59:15.975395 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.075867 master-0 kubenswrapper[4072]: E0223 12:59:16.075756 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.176132 master-0 kubenswrapper[4072]: E0223 12:59:16.175916 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.277274 master-0 kubenswrapper[4072]: E0223 12:59:16.277109 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.377469 master-0 kubenswrapper[4072]: E0223 12:59:16.377350 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.478438 master-0 kubenswrapper[4072]: E0223 12:59:16.478366 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.579650 master-0 kubenswrapper[4072]: E0223 12:59:16.579528 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.680525 master-0 kubenswrapper[4072]: E0223 12:59:16.680374 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.781353 master-0 kubenswrapper[4072]: E0223 12:59:16.781102 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.882174 master-0 kubenswrapper[4072]: E0223 12:59:16.882050 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:16.983491 master-0 kubenswrapper[4072]: E0223 12:59:16.983289 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.084106 master-0 kubenswrapper[4072]: E0223 12:59:17.083958 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.185353 master-0 kubenswrapper[4072]: E0223 12:59:17.185138 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.285791 master-0 kubenswrapper[4072]: E0223 12:59:17.285698 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.386356 master-0 kubenswrapper[4072]: E0223 12:59:17.386138 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.487098 master-0 kubenswrapper[4072]: E0223 12:59:17.486966 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.588174 master-0 kubenswrapper[4072]: E0223 12:59:17.588054 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.688907 master-0 kubenswrapper[4072]: E0223 12:59:17.688787 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.761304 master-0 kubenswrapper[4072]: E0223 12:59:17.761206 4072 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Feb 23 12:59:17.789469 master-0 kubenswrapper[4072]: E0223 12:59:17.789348 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.890437 master-0 kubenswrapper[4072]: E0223 12:59:17.890320 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:17.990722 master-0 kubenswrapper[4072]: E0223 12:59:17.990430 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.091500 master-0 kubenswrapper[4072]: E0223 12:59:18.091348 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.192614 master-0 kubenswrapper[4072]: E0223 12:59:18.192468 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.293226 master-0 kubenswrapper[4072]: E0223 12:59:18.292893 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.393296 master-0 kubenswrapper[4072]: E0223 12:59:18.393136 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.494045 master-0 kubenswrapper[4072]: E0223 12:59:18.493924 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.595132 master-0 kubenswrapper[4072]: E0223 12:59:18.594928 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.696146 master-0 kubenswrapper[4072]: E0223 12:59:18.696053 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.796714 master-0 kubenswrapper[4072]: E0223 12:59:18.796591 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.897823 master-0 kubenswrapper[4072]: E0223 12:59:18.897558 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:18.997834 master-0 kubenswrapper[4072]: E0223 12:59:18.997739 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.099120 master-0 kubenswrapper[4072]: E0223 12:59:19.098998 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.200109 master-0 kubenswrapper[4072]: E0223 12:59:19.200003 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.300994 master-0 kubenswrapper[4072]: E0223 12:59:19.300852 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.401574 master-0 kubenswrapper[4072]: E0223 12:59:19.401452 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.501900 master-0 kubenswrapper[4072]: E0223 12:59:19.501650 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.602114 master-0 kubenswrapper[4072]: E0223 12:59:19.602015 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.702878 master-0 kubenswrapper[4072]: E0223 12:59:19.702760 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.803216 master-0 kubenswrapper[4072]: E0223 12:59:19.802995 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:19.903797 master-0 kubenswrapper[4072]: E0223 12:59:19.903700 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.004932 master-0 kubenswrapper[4072]: E0223 12:59:20.004838 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.105547 master-0 kubenswrapper[4072]: E0223 12:59:20.105287 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.205484 master-0 kubenswrapper[4072]: E0223 12:59:20.205399 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.305966 master-0 kubenswrapper[4072]: E0223 12:59:20.305865 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.406630 master-0 kubenswrapper[4072]: E0223 12:59:20.406499 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.507464 master-0 kubenswrapper[4072]: E0223 12:59:20.507346 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.608320 master-0 kubenswrapper[4072]: E0223 12:59:20.608215 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.708799 master-0 kubenswrapper[4072]: E0223 12:59:20.708736 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.809924 master-0 kubenswrapper[4072]: E0223 12:59:20.809641 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:20.910467 master-0 kubenswrapper[4072]: E0223 12:59:20.910344 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.011003 master-0 kubenswrapper[4072]: E0223 12:59:21.010795 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.111502 master-0 kubenswrapper[4072]: E0223 12:59:21.111367 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.212521 master-0 kubenswrapper[4072]: E0223 12:59:21.212365 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.312851 master-0 kubenswrapper[4072]: E0223 12:59:21.312628 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.413440 master-0 kubenswrapper[4072]: E0223 12:59:21.413357 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.514612 master-0 kubenswrapper[4072]: E0223 12:59:21.514506 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.615877 master-0 kubenswrapper[4072]: E0223 12:59:21.615664 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.716957 master-0 kubenswrapper[4072]: E0223 12:59:21.716835 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.817217 master-0 kubenswrapper[4072]: E0223 12:59:21.817084 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:21.918286 master-0 kubenswrapper[4072]: E0223 12:59:21.918060 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.019310 master-0 kubenswrapper[4072]: E0223 12:59:22.019152 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.119609 master-0 kubenswrapper[4072]: E0223 12:59:22.119486 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.220407 master-0 kubenswrapper[4072]: E0223 12:59:22.220315 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.321364 master-0 kubenswrapper[4072]: E0223 12:59:22.321300 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.421777 master-0 kubenswrapper[4072]: E0223 12:59:22.421663 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.522994 master-0 kubenswrapper[4072]: E0223 12:59:22.522704 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.623777 master-0 kubenswrapper[4072]: E0223 12:59:22.623646 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.724574 master-0 kubenswrapper[4072]: E0223 12:59:22.724478 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.825080 master-0 kubenswrapper[4072]: E0223 12:59:22.824878 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:22.925374 master-0 kubenswrapper[4072]: E0223 12:59:22.925279 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.009128 master-0 kubenswrapper[4072]: E0223 12:59:23.009028 4072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:59:23.026435 master-0 kubenswrapper[4072]: E0223 12:59:23.026356 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.127646 master-0 kubenswrapper[4072]: E0223 12:59:23.127456 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.227860 master-0 kubenswrapper[4072]: E0223 12:59:23.227723 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.328791 master-0 kubenswrapper[4072]: E0223 12:59:23.328640 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.429134 master-0 kubenswrapper[4072]: E0223 12:59:23.428877 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.529678 master-0 kubenswrapper[4072]: E0223 12:59:23.529605 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.630400 master-0 kubenswrapper[4072]: E0223 12:59:23.630287 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.731090 master-0 kubenswrapper[4072]: E0223 12:59:23.730959 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.831766 master-0 kubenswrapper[4072]: E0223 12:59:23.831664 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:23.932587 master-0 kubenswrapper[4072]: E0223 12:59:23.932437 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.033046 master-0 kubenswrapper[4072]: E0223 12:59:24.032831 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.133546 master-0 kubenswrapper[4072]: E0223 12:59:24.133408 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.234493 master-0 kubenswrapper[4072]: E0223 12:59:24.234327 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.334932 master-0 kubenswrapper[4072]: E0223 12:59:24.334756 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.435993 master-0 kubenswrapper[4072]: E0223 12:59:24.435864 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.536866 master-0 kubenswrapper[4072]: E0223 12:59:24.536765 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.637943 master-0 kubenswrapper[4072]: E0223 12:59:24.637723 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.738287 master-0 kubenswrapper[4072]: E0223 12:59:24.738142 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.839291 master-0 kubenswrapper[4072]: E0223 12:59:24.839147 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:24.939505 master-0 kubenswrapper[4072]: E0223 12:59:24.939395 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.039710 master-0 kubenswrapper[4072]: E0223 12:59:25.039650 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.140745 master-0 kubenswrapper[4072]: E0223 12:59:25.140627 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.241212 master-0 kubenswrapper[4072]: E0223 12:59:25.240953 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.342238 master-0 kubenswrapper[4072]: E0223 12:59:25.342097 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.443287 master-0 kubenswrapper[4072]: E0223 12:59:25.443143 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.544335 master-0 kubenswrapper[4072]: E0223 12:59:25.544070 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.645089 master-0 kubenswrapper[4072]: E0223 12:59:25.645008 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.745588 master-0 kubenswrapper[4072]: E0223 12:59:25.745535 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.845795 master-0 kubenswrapper[4072]: E0223 12:59:25.845654 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:25.946153 master-0 kubenswrapper[4072]: E0223 12:59:25.946082 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.028604 master-0 kubenswrapper[4072]: I0223 12:59:26.028555 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:26.029964 master-0 kubenswrapper[4072]: I0223 12:59:26.029930 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:26.030122 master-0 kubenswrapper[4072]: I0223 12:59:26.029986 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:26.030122 master-0 kubenswrapper[4072]: I0223 12:59:26.030003 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:26.030531 master-0 kubenswrapper[4072]: I0223 12:59:26.030491 4072 scope.go:117] "RemoveContainer" containerID="6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd"
Feb 23 12:59:26.046283 master-0 kubenswrapper[4072]: E0223 12:59:26.046199 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.147522 master-0 kubenswrapper[4072]: E0223 12:59:26.147181 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.247836 master-0 kubenswrapper[4072]: E0223 12:59:26.247744 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.348973 master-0 kubenswrapper[4072]: E0223 12:59:26.348894 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.450042 master-0 kubenswrapper[4072]: E0223 12:59:26.449900 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.550746 master-0 kubenswrapper[4072]: E0223 12:59:26.550651 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.651840 master-0 kubenswrapper[4072]: E0223 12:59:26.651747 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.752962 master-0 kubenswrapper[4072]: E0223 12:59:26.752789 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.853221 master-0 kubenswrapper[4072]: E0223 12:59:26.853148 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:26.953664 master-0 kubenswrapper[4072]: E0223 12:59:26.953626 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.055002 master-0 kubenswrapper[4072]: E0223 12:59:27.054899 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.155852 master-0 kubenswrapper[4072]: E0223 12:59:27.155789 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.256581 master-0 kubenswrapper[4072]: E0223 12:59:27.256496 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.298423 master-0 kubenswrapper[4072]: I0223 12:59:27.298369 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 23 12:59:27.298978 master-0 kubenswrapper[4072]: I0223 12:59:27.298930 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"0bb705c5c9f04251f2f3ae5ef9f44d40f3c6c1b144c3946a4cd25703a7f7278f"}
Feb 23 12:59:27.299114 master-0 kubenswrapper[4072]: I0223 12:59:27.299083 4072 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 12:59:27.302839 master-0 kubenswrapper[4072]: I0223 12:59:27.302749 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 12:59:27.302919 master-0 kubenswrapper[4072]: I0223 12:59:27.302880 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 12:59:27.302919 master-0 kubenswrapper[4072]: I0223 12:59:27.302912 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 12:59:27.357209 master-0 kubenswrapper[4072]: E0223 12:59:27.357050 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.457572 master-0 kubenswrapper[4072]: E0223 12:59:27.457437 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.558642 master-0 kubenswrapper[4072]: E0223 12:59:27.558536 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.659779 master-0 kubenswrapper[4072]: E0223 12:59:27.659568 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.760440 master-0 kubenswrapper[4072]: E0223 12:59:27.760309 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.861550 master-0 kubenswrapper[4072]: E0223 12:59:27.861443 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:27.961986 master-0 kubenswrapper[4072]: E0223 12:59:27.961897 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:28.054117 master-0 kubenswrapper[4072]: E0223 12:59:28.054030 4072 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Feb 23 12:59:28.075674 master-0 kubenswrapper[4072]: E0223 12:59:28.075603 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:28.176175 master-0 kubenswrapper[4072]: E0223 12:59:28.176068 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:28.276830 master-0 kubenswrapper[4072]: E0223 12:59:28.276580 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:28.377686 master-0 kubenswrapper[4072]: E0223 12:59:28.377497 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:28.478649 master-0 kubenswrapper[4072]: E0223 12:59:28.478464 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:28.579754 master-0 kubenswrapper[4072]:
E0223 12:59:28.579539 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:28.680754 master-0 kubenswrapper[4072]: E0223 12:59:28.680562 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:28.781228 master-0 kubenswrapper[4072]: E0223 12:59:28.781113 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:28.881751 master-0 kubenswrapper[4072]: E0223 12:59:28.881562 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:28.981847 master-0 kubenswrapper[4072]: E0223 12:59:28.981744 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.082679 master-0 kubenswrapper[4072]: E0223 12:59:29.082547 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.183747 master-0 kubenswrapper[4072]: E0223 12:59:29.183528 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.283762 master-0 kubenswrapper[4072]: E0223 12:59:29.283675 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.384635 master-0 kubenswrapper[4072]: E0223 12:59:29.384346 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.485469 master-0 kubenswrapper[4072]: E0223 12:59:29.485019 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.586089 master-0 kubenswrapper[4072]: E0223 12:59:29.586016 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not 
found" Feb 23 12:59:29.687215 master-0 kubenswrapper[4072]: E0223 12:59:29.687099 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.788095 master-0 kubenswrapper[4072]: E0223 12:59:29.787916 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.888782 master-0 kubenswrapper[4072]: E0223 12:59:29.888670 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:29.988884 master-0 kubenswrapper[4072]: E0223 12:59:29.988809 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.090078 master-0 kubenswrapper[4072]: E0223 12:59:30.089935 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.190414 master-0 kubenswrapper[4072]: E0223 12:59:30.190329 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.291594 master-0 kubenswrapper[4072]: E0223 12:59:30.291483 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.391856 master-0 kubenswrapper[4072]: E0223 12:59:30.391699 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.493396 master-0 kubenswrapper[4072]: E0223 12:59:30.492611 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.593493 master-0 kubenswrapper[4072]: E0223 12:59:30.593390 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.694428 master-0 kubenswrapper[4072]: E0223 12:59:30.694325 4072 kubelet_node_status.go:503] "Error getting 
the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.794542 master-0 kubenswrapper[4072]: E0223 12:59:30.794439 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.895306 master-0 kubenswrapper[4072]: E0223 12:59:30.895134 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:30.996381 master-0 kubenswrapper[4072]: E0223 12:59:30.996185 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.097047 master-0 kubenswrapper[4072]: E0223 12:59:31.096941 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.197901 master-0 kubenswrapper[4072]: E0223 12:59:31.197807 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.298146 master-0 kubenswrapper[4072]: E0223 12:59:31.297957 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.398614 master-0 kubenswrapper[4072]: E0223 12:59:31.398515 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.499483 master-0 kubenswrapper[4072]: E0223 12:59:31.499374 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.600566 master-0 kubenswrapper[4072]: E0223 12:59:31.600351 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.701473 master-0 kubenswrapper[4072]: E0223 12:59:31.701361 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.801949 master-0 kubenswrapper[4072]: E0223 
12:59:31.801850 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:31.902954 master-0 kubenswrapper[4072]: E0223 12:59:31.902744 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.003654 master-0 kubenswrapper[4072]: E0223 12:59:32.003538 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.104076 master-0 kubenswrapper[4072]: E0223 12:59:32.103961 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.204982 master-0 kubenswrapper[4072]: E0223 12:59:32.204839 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.306077 master-0 kubenswrapper[4072]: E0223 12:59:32.305978 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.406396 master-0 kubenswrapper[4072]: E0223 12:59:32.406314 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.507170 master-0 kubenswrapper[4072]: E0223 12:59:32.507023 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.607967 master-0 kubenswrapper[4072]: E0223 12:59:32.607883 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.708239 master-0 kubenswrapper[4072]: E0223 12:59:32.708107 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 23 12:59:32.808855 master-0 kubenswrapper[4072]: E0223 12:59:32.808662 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Feb 23 12:59:32.909201 master-0 kubenswrapper[4072]: E0223 12:59:32.909085 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.009685 master-0 kubenswrapper[4072]: E0223 12:59:33.009564 4072 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 23 12:59:33.009685 master-0 kubenswrapper[4072]: E0223 12:59:33.009637 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.110859 master-0 kubenswrapper[4072]: E0223 12:59:33.110695 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.211596 master-0 kubenswrapper[4072]: E0223 12:59:33.211530 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.312341 master-0 kubenswrapper[4072]: E0223 12:59:33.312197 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.413500 master-0 kubenswrapper[4072]: E0223 12:59:33.413342 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.514341 master-0 kubenswrapper[4072]: E0223 12:59:33.514302 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.615142 master-0 kubenswrapper[4072]: E0223 12:59:33.615050 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.715940 master-0 kubenswrapper[4072]: E0223 12:59:33.715856 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.816370 master-0 kubenswrapper[4072]: E0223 12:59:33.816282 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:33.917461 master-0 kubenswrapper[4072]: E0223 12:59:33.917338 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.018534 master-0 kubenswrapper[4072]: E0223 12:59:34.018323 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.119144 master-0 kubenswrapper[4072]: E0223 12:59:34.119050 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.220286 master-0 kubenswrapper[4072]: E0223 12:59:34.220183 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.321537 master-0 kubenswrapper[4072]: E0223 12:59:34.321363 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.422186 master-0 kubenswrapper[4072]: E0223 12:59:34.422089 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.523065 master-0 kubenswrapper[4072]: E0223 12:59:34.522991 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.623984 master-0 kubenswrapper[4072]: E0223 12:59:34.623837 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.724778 master-0 kubenswrapper[4072]: E0223 12:59:34.724658 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.825190 master-0 kubenswrapper[4072]: E0223 12:59:34.825072 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:34.926335 master-0 kubenswrapper[4072]: E0223 12:59:34.926148 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.026799 master-0 kubenswrapper[4072]: E0223 12:59:35.026669 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.127349 master-0 kubenswrapper[4072]: E0223 12:59:35.127221 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.228224 master-0 kubenswrapper[4072]: E0223 12:59:35.228102 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.329158 master-0 kubenswrapper[4072]: E0223 12:59:35.329063 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.429341 master-0 kubenswrapper[4072]: E0223 12:59:35.429227 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.530462 master-0 kubenswrapper[4072]: E0223 12:59:35.530337 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.631320 master-0 kubenswrapper[4072]: E0223 12:59:35.631233 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.732429 master-0 kubenswrapper[4072]: E0223 12:59:35.732352 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.833056 master-0 kubenswrapper[4072]: E0223 12:59:35.832905 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:35.933649 master-0 kubenswrapper[4072]: E0223 12:59:35.933560 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.013225 master-0 kubenswrapper[4072]: I0223 12:59:36.013137 4072 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 23 12:59:36.034030 master-0 kubenswrapper[4072]: E0223 12:59:36.033951 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.134533 master-0 kubenswrapper[4072]: E0223 12:59:36.134363 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.235412 master-0 kubenswrapper[4072]: E0223 12:59:36.235318 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.336147 master-0 kubenswrapper[4072]: E0223 12:59:36.336067 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.437005 master-0 kubenswrapper[4072]: E0223 12:59:36.436875 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.537042 master-0 kubenswrapper[4072]: E0223 12:59:36.536945 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.638014 master-0 kubenswrapper[4072]: E0223 12:59:36.637928 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.738445 master-0 kubenswrapper[4072]: E0223 12:59:36.738339 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.838605 master-0 kubenswrapper[4072]: E0223 12:59:36.838470 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:36.939471 master-0 kubenswrapper[4072]: E0223 12:59:36.939353 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.039663 master-0 kubenswrapper[4072]: E0223 12:59:37.039462 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.140605 master-0 kubenswrapper[4072]: E0223 12:59:37.140450 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.240702 master-0 kubenswrapper[4072]: E0223 12:59:37.240602 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.341795 master-0 kubenswrapper[4072]: E0223 12:59:37.341656 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.442685 master-0 kubenswrapper[4072]: E0223 12:59:37.442545 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.543396 master-0 kubenswrapper[4072]: E0223 12:59:37.543297 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.644329 master-0 kubenswrapper[4072]: E0223 12:59:37.644098 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.744701 master-0 kubenswrapper[4072]: E0223 12:59:37.744587 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.845717 master-0 kubenswrapper[4072]: E0223 12:59:37.845618 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:37.946031 master-0 kubenswrapper[4072]: E0223 12:59:37.945951 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.046465 master-0 kubenswrapper[4072]: E0223 12:59:38.046405 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.134120 master-0 kubenswrapper[4072]: E0223 12:59:38.134015 4072 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Feb 23 12:59:38.150220 master-0 kubenswrapper[4072]: E0223 12:59:38.150136 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.250516 master-0 kubenswrapper[4072]: E0223 12:59:38.250345 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.350727 master-0 kubenswrapper[4072]: E0223 12:59:38.350678 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.450836 master-0 kubenswrapper[4072]: E0223 12:59:38.450788 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.552040 master-0 kubenswrapper[4072]: E0223 12:59:38.551871 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.653055 master-0 kubenswrapper[4072]: E0223 12:59:38.652913 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.753434 master-0 kubenswrapper[4072]: E0223 12:59:38.753329 4072 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 23 12:59:38.817077 master-0 kubenswrapper[4072]: I0223 12:59:38.816886 4072 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 12:59:38.878028 master-0 kubenswrapper[4072]: I0223 12:59:38.877648 4072 apiserver.go:52] "Watching apiserver"
Feb 23 12:59:38.884342 master-0 kubenswrapper[4072]: I0223 12:59:38.884234 4072 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 12:59:38.884687 master-0 kubenswrapper[4072]: I0223 12:59:38.884588 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-mtn6f","openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7","openshift-network-operator/network-operator-7d7db75979-rmsq8"]
Feb 23 12:59:38.885286 master-0 kubenswrapper[4072]: I0223 12:59:38.885233 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 12:59:38.885642 master-0 kubenswrapper[4072]: I0223 12:59:38.885392 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:38.886286 master-0 kubenswrapper[4072]: I0223 12:59:38.886206 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:38.887825 master-0 kubenswrapper[4072]: I0223 12:59:38.887743 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 23 12:59:38.888539 master-0 kubenswrapper[4072]: I0223 12:59:38.888492 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Feb 23 12:59:38.888771 master-0 kubenswrapper[4072]: I0223 12:59:38.888542 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 23 12:59:38.888911 master-0 kubenswrapper[4072]: I0223 12:59:38.888863 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 23 12:59:38.889483 master-0 kubenswrapper[4072]: I0223 12:59:38.889434 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 23 12:59:38.889588 master-0 kubenswrapper[4072]: I0223 12:59:38.889496 4072 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Feb 23 12:59:38.889897 master-0 kubenswrapper[4072]: I0223 12:59:38.889840 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 23 12:59:38.890107 master-0 kubenswrapper[4072]: I0223 12:59:38.890032 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 23 12:59:38.890189 master-0 kubenswrapper[4072]: I0223 12:59:38.890120 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Feb 23 12:59:38.891556 master-0 kubenswrapper[4072]: I0223 12:59:38.891508 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Feb 23 12:59:38.947043 master-0 kubenswrapper[4072]: I0223 12:59:38.946975 4072 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 23 12:59:39.007866 master-0 kubenswrapper[4072]: I0223 12:59:39.007681 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.007866 master-0 kubenswrapper[4072]: I0223 12:59:39.007778 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 12:59:39.007866 master-0 kubenswrapper[4072]: I0223 12:59:39.007833 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-sno-bootstrap-files\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.008482 master-0 kubenswrapper[4072]: I0223 12:59:39.007988 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.008482 master-0 kubenswrapper[4072]: I0223 12:59:39.008074 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-resolv-conf\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.008482 master-0 kubenswrapper[4072]: I0223 12:59:39.008119 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxdn\" (UniqueName: \"kubernetes.io/projected/f533d847-cace-4951-b6f0-f7dc82ca9454-kube-api-access-jqxdn\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.008482 master-0 kubenswrapper[4072]: I0223 12:59:39.008159 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppk7\" (UniqueName: \"kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 12:59:39.008482 master-0 kubenswrapper[4072]: I0223 12:59:39.008196 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-ca-bundle\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.008482 master-0 kubenswrapper[4072]: I0223 12:59:39.008232 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 12:59:39.008482 master-0 kubenswrapper[4072]: I0223 12:59:39.008399 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.008891 master-0 kubenswrapper[4072]: I0223 12:59:39.008500 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.008891 master-0 kubenswrapper[4072]: I0223 12:59:39.008560 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.008891 master-0 kubenswrapper[4072]: I0223 12:59:39.008624 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-var-run-resolv-conf\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109009 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-ca-bundle\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109077 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqxdn\" (UniqueName: \"kubernetes.io/projected/f533d847-cace-4951-b6f0-f7dc82ca9454-kube-api-access-jqxdn\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109114 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppk7\" (UniqueName: \"kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109147 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109192 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109223 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109287 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109318 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-var-run-resolv-conf\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109353 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 12:59:39.109352 master-0 kubenswrapper[4072]: I0223 12:59:39.109386 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 12:59:39.110209 master-0 kubenswrapper[4072]: I0223 12:59:39.109421 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-sno-bootstrap-files\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.110209 master-0 kubenswrapper[4072]: I0223 12:59:39.109453 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-resolv-conf\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f"
Feb 23 12:59:39.110209 master-0 kubenswrapper[4072]: I0223 12:59:39.109689 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 
12:59:39.110209 master-0 kubenswrapper[4072]: I0223 12:59:39.109807 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-resolv-conf\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 12:59:39.110209 master-0 kubenswrapper[4072]: I0223 12:59:39.109893 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:39.110209 master-0 kubenswrapper[4072]: E0223 12:59:39.109848 4072 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:39.110209 master-0 kubenswrapper[4072]: E0223 12:59:39.110025 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:39.609979024 +0000 UTC m=+67.420135666 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:39.110209 master-0 kubenswrapper[4072]: I0223 12:59:39.110195 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-var-run-resolv-conf\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 12:59:39.110746 master-0 kubenswrapper[4072]: I0223 12:59:39.110354 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:39.110746 master-0 kubenswrapper[4072]: I0223 12:59:39.110501 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 12:59:39.110746 master-0 kubenswrapper[4072]: I0223 12:59:39.110671 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-sno-bootstrap-files\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " 
pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 12:59:39.110929 master-0 kubenswrapper[4072]: I0223 12:59:39.110866 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-ca-bundle\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 12:59:39.111791 master-0 kubenswrapper[4072]: I0223 12:59:39.111691 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:39.112855 master-0 kubenswrapper[4072]: I0223 12:59:39.112774 4072 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 12:59:39.120932 master-0 kubenswrapper[4072]: I0223 12:59:39.120870 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 12:59:39.141021 master-0 kubenswrapper[4072]: I0223 12:59:39.140868 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqxdn\" (UniqueName: \"kubernetes.io/projected/f533d847-cace-4951-b6f0-f7dc82ca9454-kube-api-access-jqxdn\") pod \"assisted-installer-controller-mtn6f\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 12:59:39.141616 master-0 kubenswrapper[4072]: I0223 12:59:39.141568 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppk7\" (UniqueName: \"kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 12:59:39.141840 master-0 kubenswrapper[4072]: I0223 12:59:39.141794 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:39.209776 master-0 kubenswrapper[4072]: I0223 12:59:39.209656 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 12:59:39.228189 master-0 kubenswrapper[4072]: W0223 12:59:39.228109 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85958edf_e3da_4704_8f09_cf049101f2e6.slice/crio-1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7 WatchSource:0}: Error finding container 1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7: Status 404 returned error can't find the container with id 1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7 Feb 23 12:59:39.267271 master-0 kubenswrapper[4072]: I0223 12:59:39.267164 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 12:59:39.289220 master-0 kubenswrapper[4072]: W0223 12:59:39.289128 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf533d847_cace_4951_b6f0_f7dc82ca9454.slice/crio-b6a95e454bc009280f30c693dc88db93f3cc1480aff05204c4d58205b2ffec4b WatchSource:0}: Error finding container b6a95e454bc009280f30c693dc88db93f3cc1480aff05204c4d58205b2ffec4b: Status 404 returned error can't find the container with id b6a95e454bc009280f30c693dc88db93f3cc1480aff05204c4d58205b2ffec4b Feb 23 12:59:39.334493 master-0 kubenswrapper[4072]: I0223 12:59:39.334398 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-mtn6f" event={"ID":"f533d847-cace-4951-b6f0-f7dc82ca9454","Type":"ContainerStarted","Data":"b6a95e454bc009280f30c693dc88db93f3cc1480aff05204c4d58205b2ffec4b"} Feb 23 12:59:39.335933 master-0 kubenswrapper[4072]: I0223 12:59:39.335882 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" 
event={"ID":"85958edf-e3da-4704-8f09-cf049101f2e6","Type":"ContainerStarted","Data":"1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7"} Feb 23 12:59:39.614317 master-0 kubenswrapper[4072]: I0223 12:59:39.614210 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:39.614576 master-0 kubenswrapper[4072]: E0223 12:59:39.614447 4072 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:39.614576 master-0 kubenswrapper[4072]: E0223 12:59:39.614550 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:40.614521338 +0000 UTC m=+68.424677990 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:40.621156 master-0 kubenswrapper[4072]: I0223 12:59:40.621072 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:40.621758 master-0 kubenswrapper[4072]: E0223 12:59:40.621347 4072 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:40.621758 master-0 kubenswrapper[4072]: E0223 12:59:40.621483 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:42.62145441 +0000 UTC m=+70.431611052 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:42.635897 master-0 kubenswrapper[4072]: I0223 12:59:42.635776 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:42.637015 master-0 kubenswrapper[4072]: E0223 12:59:42.636056 4072 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:42.637015 master-0 kubenswrapper[4072]: E0223 12:59:42.636305 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:46.636183163 +0000 UTC m=+74.446339815 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:44.350081 master-0 kubenswrapper[4072]: I0223 12:59:44.349680 4072 generic.go:334] "Generic (PLEG): container finished" podID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerID="43e1e42f0f51b9501eada9df5600a37753dcd2c27cc6181d29c70a1a9b841cdd" exitCode=0 Feb 23 12:59:44.350081 master-0 kubenswrapper[4072]: I0223 12:59:44.349721 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-mtn6f" event={"ID":"f533d847-cace-4951-b6f0-f7dc82ca9454","Type":"ContainerDied","Data":"43e1e42f0f51b9501eada9df5600a37753dcd2c27cc6181d29c70a1a9b841cdd"} Feb 23 12:59:45.356547 master-0 kubenswrapper[4072]: I0223 12:59:45.355958 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" event={"ID":"85958edf-e3da-4704-8f09-cf049101f2e6","Type":"ContainerStarted","Data":"bc8ade9334364114738902823dc600f3740baca0ab4d65155426a77698e2093f"} Feb 23 12:59:45.379630 master-0 kubenswrapper[4072]: I0223 12:59:45.379176 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" podStartSLOduration=33.529840278 podStartE2EDuration="38.379145764s" podCreationTimestamp="2026-02-23 12:59:07 +0000 UTC" firstStartedPulling="2026-02-23 12:59:39.231365735 +0000 UTC m=+67.041522377" lastFinishedPulling="2026-02-23 12:59:44.080671211 +0000 UTC m=+71.890827863" observedRunningTime="2026-02-23 12:59:45.379121213 +0000 UTC m=+73.189277865" watchObservedRunningTime="2026-02-23 12:59:45.379145764 +0000 UTC m=+73.189302406" Feb 23 12:59:45.383175 master-0 kubenswrapper[4072]: I0223 12:59:45.383111 4072 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 12:59:45.454457 master-0 kubenswrapper[4072]: I0223 12:59:45.454368 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-sno-bootstrap-files\") pod \"f533d847-cace-4951-b6f0-f7dc82ca9454\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " Feb 23 12:59:45.454457 master-0 kubenswrapper[4072]: I0223 12:59:45.454437 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-resolv-conf\") pod \"f533d847-cace-4951-b6f0-f7dc82ca9454\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " Feb 23 12:59:45.454790 master-0 kubenswrapper[4072]: I0223 12:59:45.454488 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-var-run-resolv-conf\") pod \"f533d847-cace-4951-b6f0-f7dc82ca9454\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " Feb 23 12:59:45.454790 master-0 kubenswrapper[4072]: I0223 12:59:45.454514 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "f533d847-cace-4951-b6f0-f7dc82ca9454" (UID: "f533d847-cace-4951-b6f0-f7dc82ca9454"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 12:59:45.454790 master-0 kubenswrapper[4072]: I0223 12:59:45.454557 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "f533d847-cace-4951-b6f0-f7dc82ca9454" (UID: "f533d847-cace-4951-b6f0-f7dc82ca9454"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 12:59:45.454790 master-0 kubenswrapper[4072]: I0223 12:59:45.454537 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-ca-bundle\") pod \"f533d847-cace-4951-b6f0-f7dc82ca9454\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " Feb 23 12:59:45.454790 master-0 kubenswrapper[4072]: I0223 12:59:45.454599 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "f533d847-cace-4951-b6f0-f7dc82ca9454" (UID: "f533d847-cace-4951-b6f0-f7dc82ca9454"). InnerVolumeSpecName "host-ca-bundle". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 12:59:45.454790 master-0 kubenswrapper[4072]: I0223 12:59:45.454630 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqxdn\" (UniqueName: \"kubernetes.io/projected/f533d847-cace-4951-b6f0-f7dc82ca9454-kube-api-access-jqxdn\") pod \"f533d847-cace-4951-b6f0-f7dc82ca9454\" (UID: \"f533d847-cace-4951-b6f0-f7dc82ca9454\") " Feb 23 12:59:45.454790 master-0 kubenswrapper[4072]: I0223 12:59:45.454701 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "f533d847-cace-4951-b6f0-f7dc82ca9454" (UID: "f533d847-cace-4951-b6f0-f7dc82ca9454"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 12:59:45.455230 master-0 kubenswrapper[4072]: I0223 12:59:45.454744 4072 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Feb 23 12:59:45.455230 master-0 kubenswrapper[4072]: I0223 12:59:45.454866 4072 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 23 12:59:45.455230 master-0 kubenswrapper[4072]: I0223 12:59:45.454889 4072 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 12:59:45.460450 master-0 kubenswrapper[4072]: I0223 12:59:45.460381 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f533d847-cace-4951-b6f0-f7dc82ca9454-kube-api-access-jqxdn" (OuterVolumeSpecName: "kube-api-access-jqxdn") pod "f533d847-cace-4951-b6f0-f7dc82ca9454" (UID: "f533d847-cace-4951-b6f0-f7dc82ca9454"). InnerVolumeSpecName "kube-api-access-jqxdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 12:59:45.555861 master-0 kubenswrapper[4072]: I0223 12:59:45.555761 4072 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f533d847-cace-4951-b6f0-f7dc82ca9454-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 23 12:59:45.555861 master-0 kubenswrapper[4072]: I0223 12:59:45.555823 4072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqxdn\" (UniqueName: \"kubernetes.io/projected/f533d847-cace-4951-b6f0-f7dc82ca9454-kube-api-access-jqxdn\") on node \"master-0\" DevicePath \"\"" Feb 23 12:59:46.361872 master-0 kubenswrapper[4072]: I0223 12:59:46.361779 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-mtn6f" event={"ID":"f533d847-cace-4951-b6f0-f7dc82ca9454","Type":"ContainerDied","Data":"b6a95e454bc009280f30c693dc88db93f3cc1480aff05204c4d58205b2ffec4b"} Feb 23 12:59:46.361872 master-0 kubenswrapper[4072]: I0223 12:59:46.361846 4072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 12:59:46.362800 master-0 kubenswrapper[4072]: I0223 12:59:46.361870 4072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a95e454bc009280f30c693dc88db93f3cc1480aff05204c4d58205b2ffec4b" Feb 23 12:59:46.664973 master-0 kubenswrapper[4072]: I0223 12:59:46.664762 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:46.665220 master-0 kubenswrapper[4072]: E0223 12:59:46.665035 4072 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:46.665220 master-0 kubenswrapper[4072]: E0223 12:59:46.665178 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:54.665137196 +0000 UTC m=+82.475293838 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:47.251410 master-0 kubenswrapper[4072]: I0223 12:59:47.251335 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-fvj2t"] Feb 23 12:59:47.251717 master-0 kubenswrapper[4072]: E0223 12:59:47.251481 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 12:59:47.251717 master-0 kubenswrapper[4072]: I0223 12:59:47.251509 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 12:59:47.251717 master-0 kubenswrapper[4072]: I0223 12:59:47.251559 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 12:59:47.251935 master-0 kubenswrapper[4072]: I0223 12:59:47.251898 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-fvj2t" Feb 23 12:59:47.372188 master-0 kubenswrapper[4072]: I0223 12:59:47.372019 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6dg4\" (UniqueName: \"kubernetes.io/projected/a8c56df7-2c8d-40d1-b737-7fa8cc661b81-kube-api-access-l6dg4\") pod \"mtu-prober-fvj2t\" (UID: \"a8c56df7-2c8d-40d1-b737-7fa8cc661b81\") " pod="openshift-network-operator/mtu-prober-fvj2t" Feb 23 12:59:47.473419 master-0 kubenswrapper[4072]: I0223 12:59:47.473307 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6dg4\" (UniqueName: \"kubernetes.io/projected/a8c56df7-2c8d-40d1-b737-7fa8cc661b81-kube-api-access-l6dg4\") pod \"mtu-prober-fvj2t\" (UID: \"a8c56df7-2c8d-40d1-b737-7fa8cc661b81\") " pod="openshift-network-operator/mtu-prober-fvj2t" Feb 23 12:59:47.502933 master-0 kubenswrapper[4072]: I0223 12:59:47.502726 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6dg4\" (UniqueName: \"kubernetes.io/projected/a8c56df7-2c8d-40d1-b737-7fa8cc661b81-kube-api-access-l6dg4\") pod \"mtu-prober-fvj2t\" (UID: \"a8c56df7-2c8d-40d1-b737-7fa8cc661b81\") " pod="openshift-network-operator/mtu-prober-fvj2t" Feb 23 12:59:47.578441 master-0 kubenswrapper[4072]: I0223 12:59:47.578322 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-fvj2t" Feb 23 12:59:47.595626 master-0 kubenswrapper[4072]: W0223 12:59:47.595545 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8c56df7_2c8d_40d1_b737_7fa8cc661b81.slice/crio-9aae3e10927df5e25b43b0ec4577a806fa88e6da8d69640506c1023ac0726cd4 WatchSource:0}: Error finding container 9aae3e10927df5e25b43b0ec4577a806fa88e6da8d69640506c1023ac0726cd4: Status 404 returned error can't find the container with id 9aae3e10927df5e25b43b0ec4577a806fa88e6da8d69640506c1023ac0726cd4 Feb 23 12:59:48.369787 master-0 kubenswrapper[4072]: I0223 12:59:48.369461 4072 generic.go:334] "Generic (PLEG): container finished" podID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" containerID="db83ef82ac155acc22a9f418d8c50d6b04cf844595b5d8cd37f345df9398fd8f" exitCode=0 Feb 23 12:59:48.369787 master-0 kubenswrapper[4072]: I0223 12:59:48.369586 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-fvj2t" event={"ID":"a8c56df7-2c8d-40d1-b737-7fa8cc661b81","Type":"ContainerDied","Data":"db83ef82ac155acc22a9f418d8c50d6b04cf844595b5d8cd37f345df9398fd8f"} Feb 23 12:59:48.369787 master-0 kubenswrapper[4072]: I0223 12:59:48.369799 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-fvj2t" event={"ID":"a8c56df7-2c8d-40d1-b737-7fa8cc661b81","Type":"ContainerStarted","Data":"9aae3e10927df5e25b43b0ec4577a806fa88e6da8d69640506c1023ac0726cd4"} Feb 23 12:59:49.400339 master-0 kubenswrapper[4072]: I0223 12:59:49.400222 4072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-fvj2t" Feb 23 12:59:49.492211 master-0 kubenswrapper[4072]: I0223 12:59:49.492117 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6dg4\" (UniqueName: \"kubernetes.io/projected/a8c56df7-2c8d-40d1-b737-7fa8cc661b81-kube-api-access-l6dg4\") pod \"a8c56df7-2c8d-40d1-b737-7fa8cc661b81\" (UID: \"a8c56df7-2c8d-40d1-b737-7fa8cc661b81\") " Feb 23 12:59:49.497712 master-0 kubenswrapper[4072]: I0223 12:59:49.497637 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c56df7-2c8d-40d1-b737-7fa8cc661b81-kube-api-access-l6dg4" (OuterVolumeSpecName: "kube-api-access-l6dg4") pod "a8c56df7-2c8d-40d1-b737-7fa8cc661b81" (UID: "a8c56df7-2c8d-40d1-b737-7fa8cc661b81"). InnerVolumeSpecName "kube-api-access-l6dg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 12:59:49.593119 master-0 kubenswrapper[4072]: I0223 12:59:49.593032 4072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6dg4\" (UniqueName: \"kubernetes.io/projected/a8c56df7-2c8d-40d1-b737-7fa8cc661b81-kube-api-access-l6dg4\") on node \"master-0\" DevicePath \"\"" Feb 23 12:59:50.377084 master-0 kubenswrapper[4072]: I0223 12:59:50.376973 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-fvj2t" event={"ID":"a8c56df7-2c8d-40d1-b737-7fa8cc661b81","Type":"ContainerDied","Data":"9aae3e10927df5e25b43b0ec4577a806fa88e6da8d69640506c1023ac0726cd4"} Feb 23 12:59:50.377084 master-0 kubenswrapper[4072]: I0223 12:59:50.377039 4072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aae3e10927df5e25b43b0ec4577a806fa88e6da8d69640506c1023ac0726cd4" Feb 23 12:59:50.377084 master-0 kubenswrapper[4072]: I0223 12:59:50.377060 4072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-fvj2t" Feb 23 12:59:52.275091 master-0 kubenswrapper[4072]: I0223 12:59:52.275029 4072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-fvj2t"] Feb 23 12:59:52.279686 master-0 kubenswrapper[4072]: I0223 12:59:52.279650 4072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-fvj2t"] Feb 23 12:59:53.034665 master-0 kubenswrapper[4072]: I0223 12:59:53.034588 4072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" path="/var/lib/kubelet/pods/a8c56df7-2c8d-40d1-b737-7fa8cc661b81/volumes" Feb 23 12:59:54.732819 master-0 kubenswrapper[4072]: I0223 12:59:54.732734 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 12:59:54.733622 master-0 kubenswrapper[4072]: E0223 12:59:54.732969 4072 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:54.733622 master-0 kubenswrapper[4072]: E0223 12:59:54.733088 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:10.733055173 +0000 UTC m=+98.543211835 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 12:59:57.164908 master-0 kubenswrapper[4072]: I0223 12:59:57.164793 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rmz8z"] Feb 23 12:59:57.165970 master-0 kubenswrapper[4072]: E0223 12:59:57.164947 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" containerName="prober" Feb 23 12:59:57.165970 master-0 kubenswrapper[4072]: I0223 12:59:57.164974 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" containerName="prober" Feb 23 12:59:57.165970 master-0 kubenswrapper[4072]: I0223 12:59:57.165026 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" containerName="prober" Feb 23 12:59:57.165970 master-0 kubenswrapper[4072]: I0223 12:59:57.165430 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.169216 master-0 kubenswrapper[4072]: I0223 12:59:57.169163 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 12:59:57.170185 master-0 kubenswrapper[4072]: I0223 12:59:57.170132 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 12:59:57.171016 master-0 kubenswrapper[4072]: I0223 12:59:57.170892 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 12:59:57.172914 master-0 kubenswrapper[4072]: I0223 12:59:57.172880 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 12:59:57.251100 master-0 kubenswrapper[4072]: I0223 12:59:57.250990 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.251100 master-0 kubenswrapper[4072]: I0223 12:59:57.251070 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.251100 master-0 kubenswrapper[4072]: I0223 12:59:57.251112 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" 
Feb 23 12:59:57.251538 master-0 kubenswrapper[4072]: I0223 12:59:57.251149 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.251538 master-0 kubenswrapper[4072]: I0223 12:59:57.251186 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.251538 master-0 kubenswrapper[4072]: I0223 12:59:57.251367 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.251538 master-0 kubenswrapper[4072]: I0223 12:59:57.251439 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt9nl\" (UniqueName: \"kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.251538 master-0 kubenswrapper[4072]: I0223 12:59:57.251487 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " 
pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.251538 master-0 kubenswrapper[4072]: I0223 12:59:57.251513 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.251538 master-0 kubenswrapper[4072]: I0223 12:59:57.251543 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.252024 master-0 kubenswrapper[4072]: I0223 12:59:57.251572 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.252024 master-0 kubenswrapper[4072]: I0223 12:59:57.251601 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.252024 master-0 kubenswrapper[4072]: I0223 12:59:57.251622 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: 
\"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.252024 master-0 kubenswrapper[4072]: I0223 12:59:57.251658 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.252024 master-0 kubenswrapper[4072]: I0223 12:59:57.251681 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.252024 master-0 kubenswrapper[4072]: I0223 12:59:57.251724 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.252024 master-0 kubenswrapper[4072]: I0223 12:59:57.251744 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352667 master-0 kubenswrapper[4072]: I0223 12:59:57.352590 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " 
pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352667 master-0 kubenswrapper[4072]: I0223 12:59:57.352671 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352693 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352715 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352732 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352749 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352765 4072 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352781 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352799 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352817 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt9nl\" (UniqueName: \"kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352835 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352853 4072 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352868 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352885 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352903 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352919 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.352940 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.352997 master-0 kubenswrapper[4072]: I0223 12:59:57.353004 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.353962 master-0 kubenswrapper[4072]: I0223 12:59:57.353275 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.353962 master-0 kubenswrapper[4072]: I0223 12:59:57.353313 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.353962 master-0 kubenswrapper[4072]: I0223 12:59:57.353333 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.354367 master-0 kubenswrapper[4072]: I0223 12:59:57.354318 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: 
\"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.354667 master-0 kubenswrapper[4072]: I0223 12:59:57.354610 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.354970 master-0 kubenswrapper[4072]: I0223 12:59:57.354658 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355076 master-0 kubenswrapper[4072]: I0223 12:59:57.355037 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355143 master-0 kubenswrapper[4072]: I0223 12:59:57.354707 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355143 master-0 kubenswrapper[4072]: I0223 12:59:57.354744 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355143 master-0 kubenswrapper[4072]: 
I0223 12:59:57.355074 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355295 master-0 kubenswrapper[4072]: I0223 12:59:57.354794 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355295 master-0 kubenswrapper[4072]: I0223 12:59:57.355032 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355295 master-0 kubenswrapper[4072]: I0223 12:59:57.354483 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355295 master-0 kubenswrapper[4072]: I0223 12:59:57.354784 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.355843 master-0 kubenswrapper[4072]: I0223 12:59:57.355804 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.360675 master-0 kubenswrapper[4072]: I0223 12:59:57.360633 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-f7cf9"] Feb 23 12:59:57.361210 master-0 kubenswrapper[4072]: I0223 12:59:57.361177 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.364996 master-0 kubenswrapper[4072]: I0223 12:59:57.364870 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 23 12:59:57.365199 master-0 kubenswrapper[4072]: I0223 12:59:57.365145 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 12:59:57.380226 master-0 kubenswrapper[4072]: I0223 12:59:57.380175 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt9nl\" (UniqueName: \"kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.453778 master-0 kubenswrapper[4072]: I0223 12:59:57.453679 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.453935 master-0 kubenswrapper[4072]: I0223 12:59:57.453769 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" 
(UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.453935 master-0 kubenswrapper[4072]: I0223 12:59:57.453846 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.454070 master-0 kubenswrapper[4072]: I0223 12:59:57.453974 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.454070 master-0 kubenswrapper[4072]: I0223 12:59:57.454036 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.454187 master-0 kubenswrapper[4072]: I0223 12:59:57.454108 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " 
pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.454187 master-0 kubenswrapper[4072]: I0223 12:59:57.454157 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jg7c\" (UniqueName: \"kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.454355 master-0 kubenswrapper[4072]: I0223 12:59:57.454195 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.490170 master-0 kubenswrapper[4072]: I0223 12:59:57.490070 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rmz8z" Feb 23 12:59:57.511169 master-0 kubenswrapper[4072]: W0223 12:59:57.511101 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0b59f2a_7014_448c_9d3b_e38281f07dbc.slice/crio-f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072 WatchSource:0}: Error finding container f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072: Status 404 returned error can't find the container with id f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072 Feb 23 12:59:57.554693 master-0 kubenswrapper[4072]: I0223 12:59:57.554608 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.554693 master-0 kubenswrapper[4072]: I0223 12:59:57.554685 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jg7c\" (UniqueName: \"kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.554859 master-0 kubenswrapper[4072]: I0223 12:59:57.554724 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.555062 master-0 kubenswrapper[4072]: I0223 12:59:57.554995 4072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.555210 master-0 kubenswrapper[4072]: I0223 12:59:57.555145 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.555617 master-0 kubenswrapper[4072]: I0223 12:59:57.555569 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.555715 master-0 kubenswrapper[4072]: I0223 12:59:57.555647 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.555780 master-0 kubenswrapper[4072]: I0223 12:59:57.555727 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 
12:59:57.555849 master-0 kubenswrapper[4072]: I0223 12:59:57.555791 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.555916 master-0 kubenswrapper[4072]: I0223 12:59:57.555891 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.556051 master-0 kubenswrapper[4072]: I0223 12:59:57.556003 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.556137 master-0 kubenswrapper[4072]: I0223 12:59:57.555998 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.557169 master-0 kubenswrapper[4072]: I0223 12:59:57.557121 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: 
\"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.557308 master-0 kubenswrapper[4072]: I0223 12:59:57.557206 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.557638 master-0 kubenswrapper[4072]: I0223 12:59:57.557601 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.586113 master-0 kubenswrapper[4072]: I0223 12:59:57.586078 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jg7c\" (UniqueName: \"kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:57.674581 master-0 kubenswrapper[4072]: I0223 12:59:57.674509 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 12:59:58.046180 master-0 kubenswrapper[4072]: W0223 12:59:58.046091 4072 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 23 12:59:58.046909 master-0 kubenswrapper[4072]: I0223 12:59:58.046845 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 23 12:59:58.141660 master-0 kubenswrapper[4072]: I0223 12:59:58.141583 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kq2rk"] Feb 23 12:59:58.142907 master-0 kubenswrapper[4072]: I0223 12:59:58.142164 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 12:59:58.142907 master-0 kubenswrapper[4072]: E0223 12:59:58.142307 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 12:59:58.262680 master-0 kubenswrapper[4072]: I0223 12:59:58.262598 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 12:59:58.262680 master-0 kubenswrapper[4072]: I0223 12:59:58.262692 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwphb\" (UniqueName: \"kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 12:59:58.364163 master-0 kubenswrapper[4072]: I0223 12:59:58.363638 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwphb\" (UniqueName: \"kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 12:59:58.364163 master-0 kubenswrapper[4072]: I0223 12:59:58.363694 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 12:59:58.364163 master-0 kubenswrapper[4072]: E0223 12:59:58.363807 4072 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not 
registered Feb 23 12:59:58.364163 master-0 kubenswrapper[4072]: E0223 12:59:58.363874 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:58.863857314 +0000 UTC m=+86.674013926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 12:59:58.379876 master-0 kubenswrapper[4072]: I0223 12:59:58.379813 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwphb\" (UniqueName: \"kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 12:59:58.407640 master-0 kubenswrapper[4072]: I0223 12:59:58.407589 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmz8z" event={"ID":"c0b59f2a-7014-448c-9d3b-e38281f07dbc","Type":"ContainerStarted","Data":"f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072"} Feb 23 12:59:58.409176 master-0 kubenswrapper[4072]: I0223 12:59:58.409147 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" event={"ID":"65ddfc68-2612-42b6-ad11-6fe44f1cff60","Type":"ContainerStarted","Data":"929cd0d2afd60c7d9f544041dba457a14033d12033f2175e4ed353ff5c86ad87"} Feb 23 12:59:58.868048 master-0 kubenswrapper[4072]: I0223 12:59:58.867977 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 12:59:58.868313 master-0 kubenswrapper[4072]: E0223 12:59:58.868135 4072 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 12:59:58.868313 master-0 kubenswrapper[4072]: E0223 12:59:58.868201 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 12:59:59.868183561 +0000 UTC m=+87.678340183 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 12:59:59.875991 master-0 kubenswrapper[4072]: I0223 12:59:59.875925 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 12:59:59.876981 master-0 kubenswrapper[4072]: E0223 12:59:59.876183 4072 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 12:59:59.876981 master-0 kubenswrapper[4072]: E0223 12:59:59.876391 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 
nodeName:}" failed. No retries permitted until 2026-02-23 13:00:01.876349151 +0000 UTC m=+89.686505803 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:00.028805 master-0 kubenswrapper[4072]: I0223 13:00:00.028742 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:00.029025 master-0 kubenswrapper[4072]: E0223 13:00:00.028972 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:00.417526 master-0 kubenswrapper[4072]: I0223 13:00:00.417353 4072 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="a490aeb54094c79e65d9b093b1d71d57a70012d976fefb24957c763212ff701d" exitCode=0 Feb 23 13:00:00.417526 master-0 kubenswrapper[4072]: I0223 13:00:00.417427 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" event={"ID":"65ddfc68-2612-42b6-ad11-6fe44f1cff60","Type":"ContainerDied","Data":"a490aeb54094c79e65d9b093b1d71d57a70012d976fefb24957c763212ff701d"} Feb 23 13:00:00.449576 master-0 kubenswrapper[4072]: I0223 13:00:00.449414 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=2.449339646 podStartE2EDuration="2.449339646s" podCreationTimestamp="2026-02-23 
12:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 12:59:58.187917694 +0000 UTC m=+85.998074376" watchObservedRunningTime="2026-02-23 13:00:00.449339646 +0000 UTC m=+88.259496328" Feb 23 13:00:01.048384 master-0 kubenswrapper[4072]: I0223 13:00:01.048312 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 23 13:00:01.895635 master-0 kubenswrapper[4072]: I0223 13:00:01.895504 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:01.895918 master-0 kubenswrapper[4072]: E0223 13:00:01.895726 4072 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:01.895918 master-0 kubenswrapper[4072]: E0223 13:00:01.895878 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:05.895846185 +0000 UTC m=+93.706002837 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:02.028925 master-0 kubenswrapper[4072]: I0223 13:00:02.028860 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:02.029091 master-0 kubenswrapper[4072]: E0223 13:00:02.029031 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:03.038891 master-0 kubenswrapper[4072]: I0223 13:00:03.038838 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 23 13:00:04.028568 master-0 kubenswrapper[4072]: I0223 13:00:04.028499 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:04.028798 master-0 kubenswrapper[4072]: E0223 13:00:04.028717 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:05.934943 master-0 kubenswrapper[4072]: I0223 13:00:05.934762 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:05.936011 master-0 kubenswrapper[4072]: E0223 13:00:05.935134 4072 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:05.936011 master-0 kubenswrapper[4072]: E0223 13:00:05.935328 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:13.935281007 +0000 UTC m=+101.745437629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:06.029568 master-0 kubenswrapper[4072]: I0223 13:00:06.029461 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:06.029920 master-0 kubenswrapper[4072]: E0223 13:00:06.029679 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:06.436814 master-0 kubenswrapper[4072]: I0223 13:00:06.436721 4072 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="d363f0290cd5f73712e4ac4fe33436a5021a7548f84e19592e8c13df6abe2ebb" exitCode=0 Feb 23 13:00:06.436814 master-0 kubenswrapper[4072]: I0223 13:00:06.436795 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" event={"ID":"65ddfc68-2612-42b6-ad11-6fe44f1cff60","Type":"ContainerDied","Data":"d363f0290cd5f73712e4ac4fe33436a5021a7548f84e19592e8c13df6abe2ebb"} Feb 23 13:00:06.449717 master-0 kubenswrapper[4072]: I0223 13:00:06.449624 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=3.449601373 podStartE2EDuration="3.449601373s" podCreationTimestamp="2026-02-23 13:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:06.449461539 +0000 UTC m=+94.259618191" watchObservedRunningTime="2026-02-23 13:00:06.449601373 +0000 UTC m=+94.259758005" Feb 23 13:00:06.450156 master-0 kubenswrapper[4072]: I0223 13:00:06.450102 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=5.450093088 podStartE2EDuration="5.450093088s" podCreationTimestamp="2026-02-23 13:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:03.045902365 +0000 UTC m=+90.856059017" watchObservedRunningTime="2026-02-23 13:00:06.450093088 +0000 UTC m=+94.260249720" Feb 23 13:00:08.028453 master-0 kubenswrapper[4072]: I0223 13:00:08.028376 4072 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:08.029056 master-0 kubenswrapper[4072]: E0223 13:00:08.028562 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:10.028611 master-0 kubenswrapper[4072]: I0223 13:00:10.028502 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:10.029525 master-0 kubenswrapper[4072]: E0223 13:00:10.028771 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:10.184424 master-0 kubenswrapper[4072]: I0223 13:00:10.184329 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h"] Feb 23 13:00:10.184862 master-0 kubenswrapper[4072]: I0223 13:00:10.184821 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.189080 master-0 kubenswrapper[4072]: I0223 13:00:10.189015 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 13:00:10.189296 master-0 kubenswrapper[4072]: I0223 13:00:10.189122 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 13:00:10.189392 master-0 kubenswrapper[4072]: I0223 13:00:10.189282 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 13:00:10.189530 master-0 kubenswrapper[4072]: I0223 13:00:10.189023 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 13:00:10.193425 master-0 kubenswrapper[4072]: I0223 13:00:10.193379 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 13:00:10.212284 master-0 kubenswrapper[4072]: I0223 13:00:10.212048 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jlkzw"] Feb 23 13:00:10.219977 master-0 kubenswrapper[4072]: I0223 13:00:10.219911 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.233242 master-0 kubenswrapper[4072]: I0223 13:00:10.233179 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:00:10.233824 master-0 kubenswrapper[4072]: I0223 13:00:10.233783 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 13:00:10.269440 master-0 kubenswrapper[4072]: I0223 13:00:10.269338 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.269440 master-0 kubenswrapper[4072]: I0223 13:00:10.269392 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.269440 master-0 kubenswrapper[4072]: I0223 13:00:10.269441 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsp9d\" (UniqueName: \"kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.269604 master-0 kubenswrapper[4072]: I0223 13:00:10.269472 4072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.370642 master-0 kubenswrapper[4072]: I0223 13:00:10.370591 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-systemd\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.370790 master-0 kubenswrapper[4072]: I0223 13:00:10.370754 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.370864 master-0 kubenswrapper[4072]: I0223 13:00:10.370833 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-ovn\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.370923 master-0 kubenswrapper[4072]: I0223 13:00:10.370887 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-node-log\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.370958 master-0 kubenswrapper[4072]: I0223 13:00:10.370941 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-netd\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371086 master-0 kubenswrapper[4072]: I0223 13:00:10.371049 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-systemd-units\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371161 master-0 kubenswrapper[4072]: I0223 13:00:10.371129 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-bin\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371221 master-0 kubenswrapper[4072]: I0223 13:00:10.371190 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsp9d\" (UniqueName: \"kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.371338 master-0 kubenswrapper[4072]: I0223 13:00:10.371301 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371403 master-0 kubenswrapper[4072]: I0223 13:00:10.371371 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-env-overrides\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371493 master-0 kubenswrapper[4072]: I0223 13:00:10.371452 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-var-lib-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371530 master-0 kubenswrapper[4072]: I0223 13:00:10.371503 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-script-lib\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371581 master-0 kubenswrapper[4072]: I0223 13:00:10.371550 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556c4233-4196-4c65-b5d1-1c3181ebe689-ovn-node-metrics-cert\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371621 master-0 kubenswrapper[4072]: I0223 13:00:10.371605 
4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-kubelet\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371683 master-0 kubenswrapper[4072]: I0223 13:00:10.371649 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-etc-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.371814 master-0 kubenswrapper[4072]: I0223 13:00:10.371778 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.371929 master-0 kubenswrapper[4072]: I0223 13:00:10.371892 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-netns\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.372004 master-0 kubenswrapper[4072]: I0223 13:00:10.371972 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-ovn-kubernetes\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.372085 master-0 kubenswrapper[4072]: I0223 13:00:10.372056 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.372135 master-0 kubenswrapper[4072]: I0223 13:00:10.372110 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-config\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.372188 master-0 kubenswrapper[4072]: I0223 13:00:10.372160 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-slash\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.372267 master-0 kubenswrapper[4072]: I0223 13:00:10.372216 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4h89\" (UniqueName: \"kubernetes.io/projected/556c4233-4196-4c65-b5d1-1c3181ebe689-kube-api-access-r4h89\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.372352 master-0 kubenswrapper[4072]: I0223 13:00:10.372315 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config\") pod 
\"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.372664 master-0 kubenswrapper[4072]: I0223 13:00:10.372600 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-log-socket\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.373854 master-0 kubenswrapper[4072]: I0223 13:00:10.373631 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.374015 master-0 kubenswrapper[4072]: I0223 13:00:10.373974 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.379827 master-0 kubenswrapper[4072]: I0223 13:00:10.379783 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.393485 master-0 kubenswrapper[4072]: I0223 13:00:10.393430 4072 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fsp9d\" (UniqueName: \"kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.451760 master-0 kubenswrapper[4072]: I0223 13:00:10.451678 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmz8z" event={"ID":"c0b59f2a-7014-448c-9d3b-e38281f07dbc","Type":"ContainerStarted","Data":"6b03481713ab8c7ca73f8e189024cf0c9d4918e8429643cc7663fa25ed8f3a5d"} Feb 23 13:00:10.473117 master-0 kubenswrapper[4072]: I0223 13:00:10.473058 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-log-socket\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473286 master-0 kubenswrapper[4072]: I0223 13:00:10.473135 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4h89\" (UniqueName: \"kubernetes.io/projected/556c4233-4196-4c65-b5d1-1c3181ebe689-kube-api-access-r4h89\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473364 master-0 kubenswrapper[4072]: I0223 13:00:10.473281 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-log-socket\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473427 master-0 kubenswrapper[4072]: I0223 13:00:10.473400 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-systemd\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473495 master-0 kubenswrapper[4072]: I0223 13:00:10.473462 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-node-log\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473557 master-0 kubenswrapper[4072]: I0223 13:00:10.473497 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-netd\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473557 master-0 kubenswrapper[4072]: I0223 13:00:10.473536 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-ovn\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473670 master-0 kubenswrapper[4072]: I0223 13:00:10.473580 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-bin\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473670 master-0 kubenswrapper[4072]: I0223 13:00:10.473650 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-systemd-units\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473783 master-0 kubenswrapper[4072]: I0223 13:00:10.473719 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473783 master-0 kubenswrapper[4072]: I0223 13:00:10.473764 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-env-overrides\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473895 master-0 kubenswrapper[4072]: I0223 13:00:10.473797 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-var-lib-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473895 master-0 kubenswrapper[4072]: I0223 13:00:10.473830 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-script-lib\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.473895 master-0 kubenswrapper[4072]: I0223 13:00:10.473873 4072 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556c4233-4196-4c65-b5d1-1c3181ebe689-ovn-node-metrics-cert\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474064 master-0 kubenswrapper[4072]: I0223 13:00:10.473906 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-kubelet\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474064 master-0 kubenswrapper[4072]: I0223 13:00:10.473937 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-etc-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474064 master-0 kubenswrapper[4072]: I0223 13:00:10.473972 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-netns\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474064 master-0 kubenswrapper[4072]: I0223 13:00:10.474003 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-ovn-kubernetes\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474064 master-0 kubenswrapper[4072]: I0223 13:00:10.474048 4072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474376 master-0 kubenswrapper[4072]: I0223 13:00:10.474076 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-config\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474376 master-0 kubenswrapper[4072]: I0223 13:00:10.474116 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-slash\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474376 master-0 kubenswrapper[4072]: I0223 13:00:10.474205 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-slash\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474862 master-0 kubenswrapper[4072]: I0223 13:00:10.474803 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474983 master-0 kubenswrapper[4072]: I0223 13:00:10.474911 4072 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-kubelet\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.474983 master-0 kubenswrapper[4072]: I0223 13:00:10.474952 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-etc-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.475095 master-0 kubenswrapper[4072]: I0223 13:00:10.475004 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-node-log\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.475671 master-0 kubenswrapper[4072]: I0223 13:00:10.475436 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-var-lib-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.475671 master-0 kubenswrapper[4072]: I0223 13:00:10.475578 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-bin\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.475857 master-0 kubenswrapper[4072]: I0223 13:00:10.475731 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-systemd-units\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.475857 master-0 kubenswrapper[4072]: I0223 13:00:10.475799 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-netd\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.475857 master-0 kubenswrapper[4072]: I0223 13:00:10.475845 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-ovn\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.476011 master-0 kubenswrapper[4072]: I0223 13:00:10.475890 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-ovn-kubernetes\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.476011 master-0 kubenswrapper[4072]: I0223 13:00:10.475934 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-systemd\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.476321 master-0 kubenswrapper[4072]: I0223 13:00:10.476139 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-script-lib\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.476321 master-0 kubenswrapper[4072]: I0223 13:00:10.476235 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-openvswitch\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.476535 master-0 kubenswrapper[4072]: I0223 13:00:10.476346 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-netns\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.476813 master-0 kubenswrapper[4072]: I0223 13:00:10.476753 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-env-overrides\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.477457 master-0 kubenswrapper[4072]: I0223 13:00:10.477411 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-config\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.480359 master-0 kubenswrapper[4072]: I0223 13:00:10.480309 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/556c4233-4196-4c65-b5d1-1c3181ebe689-ovn-node-metrics-cert\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.506496 master-0 kubenswrapper[4072]: I0223 13:00:10.506412 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4h89\" (UniqueName: \"kubernetes.io/projected/556c4233-4196-4c65-b5d1-1c3181ebe689-kube-api-access-r4h89\") pod \"ovnkube-node-jlkzw\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.519871 master-0 kubenswrapper[4072]: I0223 13:00:10.519818 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:00:10.549314 master-0 kubenswrapper[4072]: I0223 13:00:10.549139 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:10.772917 master-0 kubenswrapper[4072]: W0223 13:00:10.772836 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4c51b25_f013_4f5c_acbd_598350468192.slice/crio-ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997 WatchSource:0}: Error finding container ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997: Status 404 returned error can't find the container with id ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997 Feb 23 13:00:10.776051 master-0 kubenswrapper[4072]: I0223 13:00:10.775953 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " 
pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:00:10.776179 master-0 kubenswrapper[4072]: E0223 13:00:10.776098 4072 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:00:10.776179 master-0 kubenswrapper[4072]: E0223 13:00:10.776148 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:42.776131404 +0000 UTC m=+130.586288016 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:00:10.779994 master-0 kubenswrapper[4072]: W0223 13:00:10.779935 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod556c4233_4196_4c65_b5d1_1c3181ebe689.slice/crio-63ce530cb0a173a9b0ff41cae30abeb84b3d356a15907fb440c631cf7fbea736 WatchSource:0}: Error finding container 63ce530cb0a173a9b0ff41cae30abeb84b3d356a15907fb440c631cf7fbea736: Status 404 returned error can't find the container with id 63ce530cb0a173a9b0ff41cae30abeb84b3d356a15907fb440c631cf7fbea736 Feb 23 13:00:11.456909 master-0 kubenswrapper[4072]: I0223 13:00:11.456827 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"63ce530cb0a173a9b0ff41cae30abeb84b3d356a15907fb440c631cf7fbea736"} Feb 23 13:00:11.460597 master-0 kubenswrapper[4072]: I0223 13:00:11.460538 4072 generic.go:334] "Generic (PLEG): 
container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="aa169cb62afad633a7432fb996d7a5e8546ab3591767d1cbb4ee55535e914204" exitCode=0 Feb 23 13:00:11.460772 master-0 kubenswrapper[4072]: I0223 13:00:11.460719 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" event={"ID":"65ddfc68-2612-42b6-ad11-6fe44f1cff60","Type":"ContainerDied","Data":"aa169cb62afad633a7432fb996d7a5e8546ab3591767d1cbb4ee55535e914204"} Feb 23 13:00:11.463676 master-0 kubenswrapper[4072]: I0223 13:00:11.462998 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" event={"ID":"b4c51b25-f013-4f5c-acbd-598350468192","Type":"ContainerStarted","Data":"f5610bfe435e8fbec12d29452fb47bbe323e82d01dd94ad65a7aad4806c2962f"} Feb 23 13:00:11.463676 master-0 kubenswrapper[4072]: I0223 13:00:11.463051 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" event={"ID":"b4c51b25-f013-4f5c-acbd-598350468192","Type":"ContainerStarted","Data":"ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997"} Feb 23 13:00:12.029201 master-0 kubenswrapper[4072]: I0223 13:00:12.028582 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:12.029201 master-0 kubenswrapper[4072]: E0223 13:00:12.028706 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:12.757078 master-0 kubenswrapper[4072]: I0223 13:00:12.756922 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rmz8z" podStartSLOduration=2.972330056 podStartE2EDuration="15.756881297s" podCreationTimestamp="2026-02-23 12:59:57 +0000 UTC" firstStartedPulling="2026-02-23 12:59:57.518240824 +0000 UTC m=+85.328397466" lastFinishedPulling="2026-02-23 13:00:10.302792095 +0000 UTC m=+98.112948707" observedRunningTime="2026-02-23 13:00:11.513282626 +0000 UTC m=+99.323439258" watchObservedRunningTime="2026-02-23 13:00:12.756881297 +0000 UTC m=+100.567037909" Feb 23 13:00:12.758518 master-0 kubenswrapper[4072]: I0223 13:00:12.757343 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-shl6r"] Feb 23 13:00:12.758518 master-0 kubenswrapper[4072]: I0223 13:00:12.758146 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:12.758518 master-0 kubenswrapper[4072]: E0223 13:00:12.758218 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d" Feb 23 13:00:12.894325 master-0 kubenswrapper[4072]: I0223 13:00:12.894213 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:12.995016 master-0 kubenswrapper[4072]: I0223 13:00:12.994929 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:13.354701 master-0 kubenswrapper[4072]: E0223 13:00:13.354628 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:13.354701 master-0 kubenswrapper[4072]: E0223 13:00:13.354700 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:13.355062 master-0 kubenswrapper[4072]: E0223 13:00:13.354727 4072 projected.go:194] Error preparing data for projected volume kube-api-access-q2cgc for pod openshift-network-diagnostics/network-check-target-shl6r: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:13.355062 master-0 kubenswrapper[4072]: E0223 13:00:13.354851 4072 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc podName:d0c7587b-eea6-4d98-b39d-3a0feba4035d nodeName:}" failed. No retries permitted until 2026-02-23 13:00:13.854816379 +0000 UTC m=+101.664973031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q2cgc" (UniqueName: "kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc") pod "network-check-target-shl6r" (UID: "d0c7587b-eea6-4d98-b39d-3a0feba4035d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:13.902340 master-0 kubenswrapper[4072]: I0223 13:00:13.902185 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:13.903023 master-0 kubenswrapper[4072]: E0223 13:00:13.902452 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:00:13.903023 master-0 kubenswrapper[4072]: E0223 13:00:13.902500 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:00:13.903023 master-0 kubenswrapper[4072]: E0223 13:00:13.902522 4072 projected.go:194] Error preparing data for projected volume kube-api-access-q2cgc for pod openshift-network-diagnostics/network-check-target-shl6r: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Feb 23 13:00:13.903023 master-0 kubenswrapper[4072]: E0223 13:00:13.902611 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc podName:d0c7587b-eea6-4d98-b39d-3a0feba4035d nodeName:}" failed. No retries permitted until 2026-02-23 13:00:14.902585447 +0000 UTC m=+102.712742089 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-q2cgc" (UniqueName: "kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc") pod "network-check-target-shl6r" (UID: "d0c7587b-eea6-4d98-b39d-3a0feba4035d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:00:14.003883 master-0 kubenswrapper[4072]: I0223 13:00:14.003802 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:14.004002 master-0 kubenswrapper[4072]: E0223 13:00:14.003942 4072 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:14.004002 master-0 kubenswrapper[4072]: E0223 13:00:14.004001 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:00:30.003985285 +0000 UTC m=+117.814141897 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:14.029611 master-0 kubenswrapper[4072]: I0223 13:00:14.029536 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:14.029824 master-0 kubenswrapper[4072]: E0223 13:00:14.029745 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:14.043750 master-0 kubenswrapper[4072]: I0223 13:00:14.043700 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 23 13:00:14.473476 master-0 kubenswrapper[4072]: I0223 13:00:14.473415 4072 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="313dcd35e66618a3a3a009757d79bf6b3b9afb4f0c77e372c518f0c8a219ea2f" exitCode=0 Feb 23 13:00:14.473780 master-0 kubenswrapper[4072]: I0223 13:00:14.473507 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" event={"ID":"65ddfc68-2612-42b6-ad11-6fe44f1cff60","Type":"ContainerDied","Data":"313dcd35e66618a3a3a009757d79bf6b3b9afb4f0c77e372c518f0c8a219ea2f"} Feb 23 13:00:14.490786 master-0 kubenswrapper[4072]: I0223 13:00:14.490679 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
podStartSLOduration=0.490651784 podStartE2EDuration="490.651784ms" podCreationTimestamp="2026-02-23 13:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:14.49051282 +0000 UTC m=+102.300669502" watchObservedRunningTime="2026-02-23 13:00:14.490651784 +0000 UTC m=+102.300808426"
Feb 23 13:00:14.911953 master-0 kubenswrapper[4072]: I0223 13:00:14.911797 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:14.912556 master-0 kubenswrapper[4072]: E0223 13:00:14.912064 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 13:00:14.912556 master-0 kubenswrapper[4072]: E0223 13:00:14.912128 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 13:00:14.912556 master-0 kubenswrapper[4072]: E0223 13:00:14.912151 4072 projected.go:194] Error preparing data for projected volume kube-api-access-q2cgc for pod openshift-network-diagnostics/network-check-target-shl6r: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:14.912556 master-0 kubenswrapper[4072]: E0223 13:00:14.912276 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc podName:d0c7587b-eea6-4d98-b39d-3a0feba4035d nodeName:}" failed. No retries permitted until 2026-02-23 13:00:16.912220872 +0000 UTC m=+104.722377524 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-q2cgc" (UniqueName: "kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc") pod "network-check-target-shl6r" (UID: "d0c7587b-eea6-4d98-b39d-3a0feba4035d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:15.028658 master-0 kubenswrapper[4072]: I0223 13:00:15.028591 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:15.028869 master-0 kubenswrapper[4072]: E0223 13:00:15.028706 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:15.338692 master-0 kubenswrapper[4072]: I0223 13:00:15.338638 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-4wvxd"]
Feb 23 13:00:15.339636 master-0 kubenswrapper[4072]: I0223 13:00:15.339042 4072 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.343572 master-0 kubenswrapper[4072]: I0223 13:00:15.343484 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 23 13:00:15.344039 master-0 kubenswrapper[4072]: I0223 13:00:15.343976 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 23 13:00:15.344228 master-0 kubenswrapper[4072]: I0223 13:00:15.344193 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 23 13:00:15.344492 master-0 kubenswrapper[4072]: I0223 13:00:15.344418 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 23 13:00:15.345546 master-0 kubenswrapper[4072]: I0223 13:00:15.345189 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 23 13:00:15.516991 master-0 kubenswrapper[4072]: I0223 13:00:15.516908 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.516991 master-0 kubenswrapper[4072]: I0223 13:00:15.516983 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crt2t\" (UniqueName: \"kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.517350 master-0 kubenswrapper[4072]: I0223 13:00:15.517026 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.517350 master-0 kubenswrapper[4072]: I0223 13:00:15.517078 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.618387 master-0 kubenswrapper[4072]: I0223 13:00:15.618206 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.618387 master-0 kubenswrapper[4072]: I0223 13:00:15.618317 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crt2t\" (UniqueName: \"kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.618387 master-0 kubenswrapper[4072]: I0223 13:00:15.618365 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName:
\"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.618721 master-0 kubenswrapper[4072]: I0223 13:00:15.618420 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.620024 master-0 kubenswrapper[4072]: I0223 13:00:15.619983 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.620487 master-0 kubenswrapper[4072]: I0223 13:00:15.620446 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.625635 master-0 kubenswrapper[4072]: I0223 13:00:15.625582 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.636504 master-0 kubenswrapper[4072]: I0223 13:00:15.636442 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crt2t\" (UniqueName: \"kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.663631 master-0 kubenswrapper[4072]: I0223 13:00:15.663540 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:00:15.680370 master-0 kubenswrapper[4072]: W0223 13:00:15.680309 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d82f223_e28b_4917_8513_3ca5c6e9bff7.slice/crio-65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f WatchSource:0}: Error finding container 65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f: Status 404 returned error can't find the container with id 65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f
Feb 23 13:00:16.029444 master-0 kubenswrapper[4072]: I0223 13:00:16.029307 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:16.031177 master-0 kubenswrapper[4072]: E0223 13:00:16.029504 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:16.480199 master-0 kubenswrapper[4072]: I0223 13:00:16.480125 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-4wvxd" event={"ID":"3d82f223-e28b-4917-8513-3ca5c6e9bff7","Type":"ContainerStarted","Data":"65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f"}
Feb 23 13:00:16.929993 master-0 kubenswrapper[4072]: I0223 13:00:16.929264 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:16.929993 master-0 kubenswrapper[4072]: E0223 13:00:16.929467 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 13:00:16.929993 master-0 kubenswrapper[4072]: E0223 13:00:16.929488 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 13:00:16.929993 master-0 kubenswrapper[4072]: E0223 13:00:16.929505 4072 projected.go:194] Error preparing data for projected volume kube-api-access-q2cgc for pod openshift-network-diagnostics/network-check-target-shl6r: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:16.929993 master-0 kubenswrapper[4072]: E0223 13:00:16.929569 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc podName:d0c7587b-eea6-4d98-b39d-3a0feba4035d nodeName:}" failed. No retries permitted until 2026-02-23 13:00:20.929550731 +0000 UTC m=+108.739707353 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-q2cgc" (UniqueName: "kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc") pod "network-check-target-shl6r" (UID: "d0c7587b-eea6-4d98-b39d-3a0feba4035d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:17.028968 master-0 kubenswrapper[4072]: I0223 13:00:17.028929 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:17.029116 master-0 kubenswrapper[4072]: E0223 13:00:17.029050 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:18.028573 master-0 kubenswrapper[4072]: I0223 13:00:18.028507 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:18.029122 master-0 kubenswrapper[4072]: E0223 13:00:18.028654 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:19.029084 master-0 kubenswrapper[4072]: I0223 13:00:19.028996 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:19.029733 master-0 kubenswrapper[4072]: E0223 13:00:19.029273 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:20.028567 master-0 kubenswrapper[4072]: I0223 13:00:20.028523 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:20.028777 master-0 kubenswrapper[4072]: E0223 13:00:20.028661 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:20.960209 master-0 kubenswrapper[4072]: I0223 13:00:20.960151 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:20.960723 master-0 kubenswrapper[4072]: E0223 13:00:20.960300 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 13:00:20.960723 master-0 kubenswrapper[4072]: E0223 13:00:20.960318 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 13:00:20.960723 master-0 kubenswrapper[4072]: E0223 13:00:20.960328 4072 projected.go:194] Error preparing data for projected volume kube-api-access-q2cgc for pod openshift-network-diagnostics/network-check-target-shl6r: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:20.960723 master-0 kubenswrapper[4072]: E0223 13:00:20.960373 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc podName:d0c7587b-eea6-4d98-b39d-3a0feba4035d nodeName:}" failed. No retries permitted until 2026-02-23 13:00:28.960359404 +0000 UTC m=+116.770516006 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-q2cgc" (UniqueName: "kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc") pod "network-check-target-shl6r" (UID: "d0c7587b-eea6-4d98-b39d-3a0feba4035d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:21.028479 master-0 kubenswrapper[4072]: I0223 13:00:21.028424 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:21.028771 master-0 kubenswrapper[4072]: E0223 13:00:21.028528 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:22.028898 master-0 kubenswrapper[4072]: I0223 13:00:22.028797 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:22.030069 master-0 kubenswrapper[4072]: E0223 13:00:22.029001 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:22.496553 master-0 kubenswrapper[4072]: I0223 13:00:22.496494 4072 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="d7c78d97c5c5cb888cf7f64ec84b51fa9486a9d5d5840d99c65981486e968902" exitCode=0
Feb 23 13:00:22.496727 master-0 kubenswrapper[4072]: I0223 13:00:22.496555 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" event={"ID":"65ddfc68-2612-42b6-ad11-6fe44f1cff60","Type":"ContainerDied","Data":"d7c78d97c5c5cb888cf7f64ec84b51fa9486a9d5d5840d99c65981486e968902"}
Feb 23 13:00:23.031596 master-0 kubenswrapper[4072]: I0223 13:00:23.031532 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:23.032470 master-0 kubenswrapper[4072]: E0223 13:00:23.032421 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:24.029397 master-0 kubenswrapper[4072]: I0223 13:00:24.029347 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:24.029578 master-0 kubenswrapper[4072]: E0223 13:00:24.029539 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:25.028920 master-0 kubenswrapper[4072]: I0223 13:00:25.028461 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:25.028920 master-0 kubenswrapper[4072]: E0223 13:00:25.028594 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:26.028563 master-0 kubenswrapper[4072]: I0223 13:00:26.028517 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:26.028757 master-0 kubenswrapper[4072]: E0223 13:00:26.028647 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:27.029782 master-0 kubenswrapper[4072]: I0223 13:00:27.029503 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:27.029782 master-0 kubenswrapper[4072]: E0223 13:00:27.029648 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:28.029613 master-0 kubenswrapper[4072]: I0223 13:00:28.029217 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:28.030000 master-0 kubenswrapper[4072]: E0223 13:00:28.029940 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:28.517719 master-0 kubenswrapper[4072]: I0223 13:00:28.517639 4072 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="2a70c0c29b6d30120d04b79d2da1e4abf09061bb5671dd422b5ce63244e7fbf8" exitCode=0
Feb 23 13:00:28.517719 master-0 kubenswrapper[4072]: I0223 13:00:28.517701 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" event={"ID":"65ddfc68-2612-42b6-ad11-6fe44f1cff60","Type":"ContainerDied","Data":"2a70c0c29b6d30120d04b79d2da1e4abf09061bb5671dd422b5ce63244e7fbf8"}
Feb 23 13:00:28.520770 master-0 kubenswrapper[4072]: I0223 13:00:28.520682 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" event={"ID":"b4c51b25-f013-4f5c-acbd-598350468192","Type":"ContainerStarted","Data":"c7825c24449084470222f141223b142962350c867bc7733a06b6b459b6dc7405"}
Feb 23 13:00:28.530430 master-0 kubenswrapper[4072]: I0223 13:00:28.530327 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-4wvxd" event={"ID":"3d82f223-e28b-4917-8513-3ca5c6e9bff7","Type":"ContainerStarted","Data":"c1dd3ed6ae85552fa55579d176bf04ab4acb74f8741f6985ce9c654154b5172e"}
Feb 23 13:00:28.530430 master-0 kubenswrapper[4072]: I0223 13:00:28.530426 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-4wvxd" event={"ID":"3d82f223-e28b-4917-8513-3ca5c6e9bff7","Type":"ContainerStarted","Data":"9cd234ed6b8c15b6ef57d4b02e5f80e2f747f3235364a87dcbaf6ecc39b293c8"}
Feb 23 13:00:28.535081 master-0 kubenswrapper[4072]: I0223 13:00:28.534883 4072 generic.go:334] "Generic (PLEG): container finished" podID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerID="01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39" exitCode=0
Feb 23 13:00:28.535081 master-0 kubenswrapper[4072]: I0223 13:00:28.534930 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"}
Feb 23 13:00:28.577699 master-0 kubenswrapper[4072]: I0223 13:00:28.577587 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" podStartSLOduration=2.001701038 podStartE2EDuration="18.577553539s" podCreationTimestamp="2026-02-23 13:00:10 +0000 UTC" firstStartedPulling="2026-02-23 13:00:11.060010798 +0000 UTC m=+98.870167420" lastFinishedPulling="2026-02-23 13:00:27.635863299 +0000 UTC m=+115.446019921" observedRunningTime="2026-02-23 13:00:28.57592591 +0000 UTC m=+116.386082582" watchObservedRunningTime="2026-02-23 13:00:28.577553539 +0000 UTC m=+116.387710181"
Feb 23 13:00:28.643659 master-0 kubenswrapper[4072]: I0223 13:00:28.643470 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-4wvxd" podStartSLOduration=1.652850843 podStartE2EDuration="13.643405671s" podCreationTimestamp="2026-02-23 13:00:15 +0000 UTC" firstStartedPulling="2026-02-23 13:00:15.68330777 +0000 UTC m=+103.493464422" lastFinishedPulling="2026-02-23 13:00:27.673862628 +0000 UTC m=+115.484019250" observedRunningTime="2026-02-23 13:00:28.640907116 +0000 UTC m=+116.451063798" watchObservedRunningTime="2026-02-23 13:00:28.643405671 +0000 UTC m=+116.453562313"
Feb 23 13:00:29.029322 master-0 kubenswrapper[4072]: I0223 13:00:29.028721 4072 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:29.029499 master-0 kubenswrapper[4072]: E0223 13:00:29.029386 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:29.032478 master-0 kubenswrapper[4072]: I0223 13:00:29.032236 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:29.039699 master-0 kubenswrapper[4072]: E0223 13:00:29.032486 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 13:00:29.039699 master-0 kubenswrapper[4072]: E0223 13:00:29.032552 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 13:00:29.039699 master-0 kubenswrapper[4072]: E0223 13:00:29.032573 4072 projected.go:194] Error preparing data for projected volume kube-api-access-q2cgc for pod openshift-network-diagnostics/network-check-target-shl6r: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:29.039699 master-0 kubenswrapper[4072]: E0223 13:00:29.032675 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc podName:d0c7587b-eea6-4d98-b39d-3a0feba4035d nodeName:}" failed. No retries permitted until 2026-02-23 13:00:45.032648661 +0000 UTC m=+132.842805303 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-q2cgc" (UniqueName: "kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc") pod "network-check-target-shl6r" (UID: "d0c7587b-eea6-4d98-b39d-3a0feba4035d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:29.546045 master-0 kubenswrapper[4072]: I0223 13:00:29.545876 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" event={"ID":"65ddfc68-2612-42b6-ad11-6fe44f1cff60","Type":"ContainerStarted","Data":"4fd433ef9f34228fefe06c987c56dc0330ea2c25df40eb947005e7cb366a761a"}
Feb 23 13:00:29.556130 master-0 kubenswrapper[4072]: I0223 13:00:29.556043 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"}
Feb 23 13:00:29.556130 master-0 kubenswrapper[4072]: I0223 13:00:29.556130 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"}
Feb 23 13:00:29.556475 master-0 kubenswrapper[4072]: I0223 13:00:29.556151 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw"
event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} Feb 23 13:00:29.556475 master-0 kubenswrapper[4072]: I0223 13:00:29.556169 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} Feb 23 13:00:29.556475 master-0 kubenswrapper[4072]: I0223 13:00:29.556185 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} Feb 23 13:00:29.556475 master-0 kubenswrapper[4072]: I0223 13:00:29.556204 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} Feb 23 13:00:29.710890 master-0 kubenswrapper[4072]: I0223 13:00:29.710769 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f7cf9" podStartSLOduration=8.706555387 podStartE2EDuration="32.710743554s" podCreationTimestamp="2026-02-23 12:59:57 +0000 UTC" firstStartedPulling="2026-02-23 12:59:57.693198175 +0000 UTC m=+85.503354827" lastFinishedPulling="2026-02-23 13:00:21.697386382 +0000 UTC m=+109.507542994" observedRunningTime="2026-02-23 13:00:29.710419554 +0000 UTC m=+117.520576246" watchObservedRunningTime="2026-02-23 13:00:29.710743554 +0000 UTC m=+117.520900196" Feb 23 13:00:30.029418 master-0 kubenswrapper[4072]: I0223 13:00:30.029330 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:30.029676 master-0 kubenswrapper[4072]: E0223 13:00:30.029538 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:30.043756 master-0 kubenswrapper[4072]: I0223 13:00:30.043670 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:30.044575 master-0 kubenswrapper[4072]: E0223 13:00:30.043911 4072 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:30.044575 master-0 kubenswrapper[4072]: E0223 13:00:30.043983 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:02.043957705 +0000 UTC m=+149.854114347 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:00:31.028592 master-0 kubenswrapper[4072]: I0223 13:00:31.028501 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:31.028881 master-0 kubenswrapper[4072]: E0223 13:00:31.028690 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d" Feb 23 13:00:32.028884 master-0 kubenswrapper[4072]: I0223 13:00:32.028689 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:32.029485 master-0 kubenswrapper[4072]: E0223 13:00:32.028891 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:32.572023 master-0 kubenswrapper[4072]: I0223 13:00:32.571927 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} Feb 23 13:00:32.871799 master-0 kubenswrapper[4072]: E0223 13:00:32.871297 4072 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 23 13:00:33.028653 master-0 kubenswrapper[4072]: I0223 13:00:33.028592 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:33.029305 master-0 kubenswrapper[4072]: E0223 13:00:33.029201 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d" Feb 23 13:00:33.036867 master-0 kubenswrapper[4072]: E0223 13:00:33.036798 4072 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:00:34.029355 master-0 kubenswrapper[4072]: I0223 13:00:34.029297 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:34.030283 master-0 kubenswrapper[4072]: E0223 13:00:34.029483 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:34.586929 master-0 kubenswrapper[4072]: I0223 13:00:34.586724 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerStarted","Data":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} Feb 23 13:00:34.587221 master-0 kubenswrapper[4072]: I0223 13:00:34.587141 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:34.587221 master-0 kubenswrapper[4072]: I0223 13:00:34.587221 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:34.626231 master-0 kubenswrapper[4072]: I0223 13:00:34.626108 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podStartSLOduration=7.824946753 podStartE2EDuration="24.626080343s" podCreationTimestamp="2026-02-23 13:00:10 +0000 UTC" firstStartedPulling="2026-02-23 13:00:10.783093883 +0000 UTC m=+98.593250505" lastFinishedPulling="2026-02-23 13:00:27.584227483 +0000 UTC m=+115.394384095" observedRunningTime="2026-02-23 13:00:34.625070183 +0000 UTC m=+122.435226875" watchObservedRunningTime="2026-02-23 13:00:34.626080343 +0000 UTC m=+122.436237005" Feb 23 13:00:34.627209 master-0 kubenswrapper[4072]: I0223 13:00:34.627155 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:34.988779 master-0 kubenswrapper[4072]: I0223 13:00:34.988668 4072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jlkzw"] Feb 23 13:00:35.028744 master-0 kubenswrapper[4072]: I0223 13:00:35.028635 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:35.028996 master-0 kubenswrapper[4072]: E0223 13:00:35.028883 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d" Feb 23 13:00:35.592123 master-0 kubenswrapper[4072]: I0223 13:00:35.591287 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:35.628446 master-0 kubenswrapper[4072]: I0223 13:00:35.628366 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:36.030191 master-0 kubenswrapper[4072]: I0223 13:00:36.030023 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:36.030623 master-0 kubenswrapper[4072]: E0223 13:00:36.030538 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:36.595105 master-0 kubenswrapper[4072]: I0223 13:00:36.594676 4072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovn-controller" containerID="cri-o://9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e" gracePeriod=30 Feb 23 13:00:36.595105 master-0 kubenswrapper[4072]: I0223 13:00:36.594725 4072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="nbdb" containerID="cri-o://c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" gracePeriod=30 Feb 23 13:00:36.595105 master-0 kubenswrapper[4072]: I0223 13:00:36.594818 4072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" gracePeriod=30 Feb 23 13:00:36.595105 master-0 kubenswrapper[4072]: I0223 13:00:36.594908 4072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kube-rbac-proxy-node" containerID="cri-o://45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" gracePeriod=30 Feb 23 13:00:36.595105 master-0 kubenswrapper[4072]: I0223 13:00:36.595012 4072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="sbdb" containerID="cri-o://5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" gracePeriod=30 Feb 23 
13:00:36.595105 master-0 kubenswrapper[4072]: I0223 13:00:36.594971 4072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="northd" containerID="cri-o://3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" gracePeriod=30 Feb 23 13:00:36.596422 master-0 kubenswrapper[4072]: I0223 13:00:36.594986 4072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovn-acl-logging" containerID="cri-o://9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e" gracePeriod=30 Feb 23 13:00:36.662032 master-0 kubenswrapper[4072]: I0223 13:00:36.661692 4072 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovnkube-controller" containerID="cri-o://dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" gracePeriod=30 Feb 23 13:00:37.028987 master-0 kubenswrapper[4072]: I0223 13:00:37.028890 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:37.029172 master-0 kubenswrapper[4072]: E0223 13:00:37.029115 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d" Feb 23 13:00:37.063449 master-0 kubenswrapper[4072]: I0223 13:00:37.063372 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/ovnkube-controller/0.log" Feb 23 13:00:37.066551 master-0 kubenswrapper[4072]: I0223 13:00:37.066496 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/kube-rbac-proxy-ovn-metrics/0.log" Feb 23 13:00:37.067368 master-0 kubenswrapper[4072]: I0223 13:00:37.067334 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/kube-rbac-proxy-node/0.log" Feb 23 13:00:37.068230 master-0 kubenswrapper[4072]: I0223 13:00:37.068164 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/ovn-acl-logging/0.log" Feb 23 13:00:37.069171 master-0 kubenswrapper[4072]: I0223 13:00:37.069119 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/ovn-controller/0.log" Feb 23 13:00:37.069902 master-0 kubenswrapper[4072]: I0223 13:00:37.069857 4072 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:37.103568 master-0 kubenswrapper[4072]: I0223 13:00:37.103468 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-netns\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.103810 master-0 kubenswrapper[4072]: I0223 13:00:37.103720 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:00:37.135516 master-0 kubenswrapper[4072]: I0223 13:00:37.135447 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-45ncb"] Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: E0223 13:00:37.135600 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: I0223 13:00:37.135623 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: E0223 13:00:37.135638 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="nbdb" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: I0223 13:00:37.135650 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="nbdb" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: E0223 13:00:37.135665 4072 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovn-acl-logging" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: I0223 13:00:37.135678 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovn-acl-logging" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: E0223 13:00:37.135692 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="northd" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: I0223 13:00:37.135707 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="northd" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: E0223 13:00:37.135722 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kubecfg-setup" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: I0223 13:00:37.135734 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kubecfg-setup" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: E0223 13:00:37.135747 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovnkube-controller" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: I0223 13:00:37.135760 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovnkube-controller" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: E0223 13:00:37.135773 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovn-controller" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: I0223 13:00:37.135785 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" 
containerName="ovn-controller" Feb 23 13:00:37.135782 master-0 kubenswrapper[4072]: E0223 13:00:37.135799 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kube-rbac-proxy-node" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.135812 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kube-rbac-proxy-node" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: E0223 13:00:37.135825 4072 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="sbdb" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.135837 4072 state_mem.go:107] "Deleted CPUSet assignment" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="sbdb" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.135971 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="northd" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.136017 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kube-rbac-proxy-node" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.136114 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.136134 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovn-acl-logging" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.136149 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="nbdb" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 
13:00:37.136161 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="sbdb" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.136207 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovnkube-controller" Feb 23 13:00:37.136572 master-0 kubenswrapper[4072]: I0223 13:00:37.136220 4072 memory_manager.go:354] "RemoveStaleState removing state" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerName="ovn-controller" Feb 23 13:00:37.138080 master-0 kubenswrapper[4072]: I0223 13:00:37.138040 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:00:37.204132 master-0 kubenswrapper[4072]: I0223 13:00:37.204011 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-netd\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.204440 master-0 kubenswrapper[4072]: I0223 13:00:37.204140 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-kubelet\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.204440 master-0 kubenswrapper[4072]: I0223 13:00:37.204181 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:00:37.204440 master-0 kubenswrapper[4072]: I0223 13:00:37.204230 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:00:37.204440 master-0 kubenswrapper[4072]: I0223 13:00:37.204399 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-script-lib\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.204793 master-0 kubenswrapper[4072]: I0223 13:00:37.204705 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-slash\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.204913 master-0 kubenswrapper[4072]: I0223 13:00:37.204870 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-slash" (OuterVolumeSpecName: "host-slash") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:00:37.205157 master-0 kubenswrapper[4072]: I0223 13:00:37.205101 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-systemd\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.205355 master-0 kubenswrapper[4072]: I0223 13:00:37.205303 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-var-lib-openvswitch\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.205401 master-0 kubenswrapper[4072]: I0223 13:00:37.205337 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:00:37.205448 master-0 kubenswrapper[4072]: I0223 13:00:37.205401 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:00:37.205570 master-0 kubenswrapper[4072]: I0223 13:00:37.205534 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-ovn-kubernetes\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.205618 master-0 kubenswrapper[4072]: I0223 13:00:37.205587 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-config\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.205655 master-0 kubenswrapper[4072]: I0223 13:00:37.205627 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-node-log\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.205694 master-0 kubenswrapper[4072]: I0223 13:00:37.205659 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-systemd-units\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") " Feb 23 13:00:37.205730 master-0 kubenswrapper[4072]: I0223 13:00:37.205665 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.205730 master-0 kubenswrapper[4072]: I0223 13:00:37.205723 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.205803 master-0 kubenswrapper[4072]: I0223 13:00:37.205689 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-openvswitch\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.205803 master-0 kubenswrapper[4072]: I0223 13:00:37.205764 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-node-log" (OuterVolumeSpecName: "node-log") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.205873 master-0 kubenswrapper[4072]: I0223 13:00:37.205775 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-bin\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.205907 master-0 kubenswrapper[4072]: I0223 13:00:37.205869 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4h89\" (UniqueName: \"kubernetes.io/projected/556c4233-4196-4c65-b5d1-1c3181ebe689-kube-api-access-r4h89\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.205942 master-0 kubenswrapper[4072]: I0223 13:00:37.205803 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.205985 master-0 kubenswrapper[4072]: I0223 13:00:37.205934 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556c4233-4196-4c65-b5d1-1c3181ebe689-ovn-node-metrics-cert\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.205985 master-0 kubenswrapper[4072]: I0223 13:00:37.205815 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.206050 master-0 kubenswrapper[4072]: I0223 13:00:37.205983 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-log-socket\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.206102 master-0 kubenswrapper[4072]: I0223 13:00:37.206074 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-log-socket" (OuterVolumeSpecName: "log-socket") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.206228 master-0 kubenswrapper[4072]: I0223 13:00:37.206203 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.206350 master-0 kubenswrapper[4072]: I0223 13:00:37.206035 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-ovn\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.206536 master-0 kubenswrapper[4072]: I0223 13:00:37.206481 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:00:37.206586 master-0 kubenswrapper[4072]: I0223 13:00:37.206531 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.206634 master-0 kubenswrapper[4072]: I0223 13:00:37.206606 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-etc-openvswitch\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.206742 master-0 kubenswrapper[4072]: I0223 13:00:37.206708 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-env-overrides\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.207633 master-0 kubenswrapper[4072]: I0223 13:00:37.207587 4072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-var-lib-cni-networks-ovn-kubernetes\") pod \"556c4233-4196-4c65-b5d1-1c3181ebe689\" (UID: \"556c4233-4196-4c65-b5d1-1c3181ebe689\") "
Feb 23 13:00:37.207751 master-0 kubenswrapper[4072]: I0223 13:00:37.207712 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.207810 master-0 kubenswrapper[4072]: I0223 13:00:37.207782 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.207881 master-0 kubenswrapper[4072]: I0223 13:00:37.207844 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208155 master-0 kubenswrapper[4072]: I0223 13:00:37.207910 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208155 master-0 kubenswrapper[4072]: I0223 13:00:37.207998 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208155 master-0 kubenswrapper[4072]: I0223 13:00:37.208034 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208155 master-0 kubenswrapper[4072]: I0223 13:00:37.208084 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208155 master-0 kubenswrapper[4072]: I0223 13:00:37.208115 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208155 master-0 kubenswrapper[4072]: I0223 13:00:37.208146 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208436 master-0 kubenswrapper[4072]: I0223 13:00:37.207481 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:00:37.208436 master-0 kubenswrapper[4072]: I0223 13:00:37.208286 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.208436 master-0 kubenswrapper[4072]: I0223 13:00:37.208342 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208436 master-0 kubenswrapper[4072]: I0223 13:00:37.208414 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208675 master-0 kubenswrapper[4072]: I0223 13:00:37.208483 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208675 master-0 kubenswrapper[4072]: I0223 13:00:37.208569 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208742 master-0 kubenswrapper[4072]: I0223 13:00:37.208684 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208777 master-0 kubenswrapper[4072]: I0223 13:00:37.208744 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208877 master-0 kubenswrapper[4072]: I0223 13:00:37.208800 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.208948 master-0 kubenswrapper[4072]: I0223 13:00:37.208921 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v7b9\" (UniqueName: \"kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.209103 master-0 kubenswrapper[4072]: I0223 13:00:37.209006 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.209103 master-0 kubenswrapper[4072]: I0223 13:00:37.209087 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.209339 master-0 kubenswrapper[4072]: I0223 13:00:37.209293 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.209465 master-0 kubenswrapper[4072]: I0223 13:00:37.209440 4072 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-slash\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209509 master-0 kubenswrapper[4072]: I0223 13:00:37.209472 4072 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209509 master-0 kubenswrapper[4072]: I0223 13:00:37.209500 4072 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209581 master-0 kubenswrapper[4072]: I0223 13:00:37.209528 4072 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209581 master-0 kubenswrapper[4072]: I0223 13:00:37.209556 4072 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-node-log\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209650 master-0 kubenswrapper[4072]: I0223 13:00:37.209578 4072 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-systemd-units\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209650 master-0 kubenswrapper[4072]: I0223 13:00:37.209604 4072 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209650 master-0 kubenswrapper[4072]: I0223 13:00:37.209623 4072 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-bin\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209650 master-0 kubenswrapper[4072]: I0223 13:00:37.209640 4072 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-run-netns\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209776 master-0 kubenswrapper[4072]: I0223 13:00:37.209660 4072 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-log-socket\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209776 master-0 kubenswrapper[4072]: I0223 13:00:37.209680 4072 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-ovn\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209776 master-0 kubenswrapper[4072]: I0223 13:00:37.209696 4072 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-etc-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209776 master-0 kubenswrapper[4072]: I0223 13:00:37.209713 4072 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-env-overrides\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209776 master-0 kubenswrapper[4072]: I0223 13:00:37.209733 4072 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209776 master-0 kubenswrapper[4072]: I0223 13:00:37.209751 4072 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-cni-netd\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.209776 master-0 kubenswrapper[4072]: I0223 13:00:37.209769 4072 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-host-kubelet\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.210009 master-0 kubenswrapper[4072]: I0223 13:00:37.209786 4072 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/556c4233-4196-4c65-b5d1-1c3181ebe689-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.213143 master-0 kubenswrapper[4072]: I0223 13:00:37.213075 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/556c4233-4196-4c65-b5d1-1c3181ebe689-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:00:37.213509 master-0 kubenswrapper[4072]: I0223 13:00:37.213439 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/556c4233-4196-4c65-b5d1-1c3181ebe689-kube-api-access-r4h89" (OuterVolumeSpecName: "kube-api-access-r4h89") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "kube-api-access-r4h89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:00:37.217582 master-0 kubenswrapper[4072]: I0223 13:00:37.217549 4072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "556c4233-4196-4c65-b5d1-1c3181ebe689" (UID: "556c4233-4196-4c65-b5d1-1c3181ebe689"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:00:37.310765 master-0 kubenswrapper[4072]: I0223 13:00:37.310642 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311006 master-0 kubenswrapper[4072]: I0223 13:00:37.310817 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311006 master-0 kubenswrapper[4072]: I0223 13:00:37.310885 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311006 master-0 kubenswrapper[4072]: I0223 13:00:37.310943 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311168 master-0 kubenswrapper[4072]: I0223 13:00:37.311042 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311168 master-0 kubenswrapper[4072]: I0223 13:00:37.311110 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311168 master-0 kubenswrapper[4072]: I0223 13:00:37.311159 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311390 master-0 kubenswrapper[4072]: I0223 13:00:37.311201 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311390 master-0 kubenswrapper[4072]: I0223 13:00:37.311236 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311390 master-0 kubenswrapper[4072]: I0223 13:00:37.311359 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311545 master-0 kubenswrapper[4072]: I0223 13:00:37.311468 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311668 master-0 kubenswrapper[4072]: I0223 13:00:37.311619 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311731 master-0 kubenswrapper[4072]: I0223 13:00:37.311686 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311794 master-0 kubenswrapper[4072]: I0223 13:00:37.311744 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311794 master-0 kubenswrapper[4072]: I0223 13:00:37.311768 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.311899 master-0 kubenswrapper[4072]: I0223 13:00:37.311864 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312020 master-0 kubenswrapper[4072]: I0223 13:00:37.311989 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312315 master-0 kubenswrapper[4072]: I0223 13:00:37.312277 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312459 master-0 kubenswrapper[4072]: I0223 13:00:37.312328 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312459 master-0 kubenswrapper[4072]: I0223 13:00:37.312358 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312459 master-0 kubenswrapper[4072]: I0223 13:00:37.312439 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312640 master-0 kubenswrapper[4072]: I0223 13:00:37.312472 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312640 master-0 kubenswrapper[4072]: I0223 13:00:37.312506 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312640 master-0 kubenswrapper[4072]: I0223 13:00:37.312541 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312640 master-0 kubenswrapper[4072]: I0223 13:00:37.312574 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312640 master-0 kubenswrapper[4072]: I0223 13:00:37.312605 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v7b9\" (UniqueName: \"kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312901 master-0 kubenswrapper[4072]: I0223 13:00:37.312639 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312901 master-0 kubenswrapper[4072]: I0223 13:00:37.312713 4072 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556c4233-4196-4c65-b5d1-1c3181ebe689-run-systemd\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.312901 master-0 kubenswrapper[4072]: I0223 13:00:37.312748 4072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4h89\" (UniqueName: \"kubernetes.io/projected/556c4233-4196-4c65-b5d1-1c3181ebe689-kube-api-access-r4h89\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.312901 master-0 kubenswrapper[4072]: I0223 13:00:37.312776 4072 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556c4233-4196-4c65-b5d1-1c3181ebe689-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:00:37.312901 master-0 kubenswrapper[4072]: I0223 13:00:37.312771 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.312901 master-0 kubenswrapper[4072]: I0223 13:00:37.312897 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.313219 master-0 kubenswrapper[4072]: I0223 13:00:37.312982 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.313219 master-0 kubenswrapper[4072]: I0223 13:00:37.313051 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.313219 master-0 kubenswrapper[4072]: I0223 13:00:37.313112 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.313415 master-0 kubenswrapper[4072]: I0223 13:00:37.313283 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.313848 master-0 kubenswrapper[4072]: I0223 13:00:37.313467 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.313848 master-0 kubenswrapper[4072]: I0223 13:00:37.313595 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.313848 master-0 kubenswrapper[4072]: I0223 13:00:37.313656 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.313848 master-0 kubenswrapper[4072]: I0223 13:00:37.313713 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.314337 master-0 kubenswrapper[4072]: I0223 13:00:37.314305 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.316288 master-0 kubenswrapper[4072]: I0223 13:00:37.316206 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.343585 master-0 kubenswrapper[4072]: I0223 13:00:37.343480 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v7b9\" (UniqueName: \"kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:37.461775 master-0 kubenswrapper[4072]: I0223 13:00:37.461548 4072 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:00:37.481509 master-0 kubenswrapper[4072]: W0223 13:00:37.481443 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc2e8a2_ea4d_4d8d_9bdf_5127a8d717c2.slice/crio-7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e WatchSource:0}: Error finding container 7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e: Status 404 returned error can't find the container with id 7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e Feb 23 13:00:37.600606 master-0 kubenswrapper[4072]: I0223 13:00:37.600558 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/ovnkube-controller/0.log" Feb 23 13:00:37.603099 master-0 kubenswrapper[4072]: I0223 13:00:37.603074 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/kube-rbac-proxy-ovn-metrics/0.log" Feb 23 13:00:37.604273 master-0 kubenswrapper[4072]: I0223 13:00:37.604203 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/kube-rbac-proxy-node/0.log" Feb 23 13:00:37.605074 master-0 kubenswrapper[4072]: I0223 13:00:37.605055 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/ovn-acl-logging/0.log" Feb 23 13:00:37.605940 master-0 kubenswrapper[4072]: I0223 13:00:37.605924 4072 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jlkzw_556c4233-4196-4c65-b5d1-1c3181ebe689/ovn-controller/0.log" Feb 23 13:00:37.606748 master-0 kubenswrapper[4072]: I0223 13:00:37.606700 4072 generic.go:334] "Generic (PLEG): container finished" 
podID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerID="dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" exitCode=2 Feb 23 13:00:37.606837 master-0 kubenswrapper[4072]: I0223 13:00:37.606756 4072 generic.go:334] "Generic (PLEG): container finished" podID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerID="5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" exitCode=0 Feb 23 13:00:37.606837 master-0 kubenswrapper[4072]: I0223 13:00:37.606811 4072 generic.go:334] "Generic (PLEG): container finished" podID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerID="c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" exitCode=0 Feb 23 13:00:37.606946 master-0 kubenswrapper[4072]: I0223 13:00:37.606840 4072 generic.go:334] "Generic (PLEG): container finished" podID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerID="3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" exitCode=0 Feb 23 13:00:37.606946 master-0 kubenswrapper[4072]: I0223 13:00:37.606860 4072 generic.go:334] "Generic (PLEG): container finished" podID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerID="c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" exitCode=143 Feb 23 13:00:37.606946 master-0 kubenswrapper[4072]: I0223 13:00:37.606879 4072 generic.go:334] "Generic (PLEG): container finished" podID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerID="45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" exitCode=143 Feb 23 13:00:37.606946 master-0 kubenswrapper[4072]: I0223 13:00:37.606899 4072 generic.go:334] "Generic (PLEG): container finished" podID="556c4233-4196-4c65-b5d1-1c3181ebe689" containerID="9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e" exitCode=143 Feb 23 13:00:37.606946 master-0 kubenswrapper[4072]: I0223 13:00:37.606934 4072 generic.go:334] "Generic (PLEG): container finished" podID="556c4233-4196-4c65-b5d1-1c3181ebe689" 
containerID="9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e" exitCode=143 Feb 23 13:00:37.607169 master-0 kubenswrapper[4072]: I0223 13:00:37.606882 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} Feb 23 13:00:37.607169 master-0 kubenswrapper[4072]: I0223 13:00:37.607013 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} Feb 23 13:00:37.607169 master-0 kubenswrapper[4072]: I0223 13:00:37.607051 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} Feb 23 13:00:37.607169 master-0 kubenswrapper[4072]: I0223 13:00:37.607072 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} Feb 23 13:00:37.607169 master-0 kubenswrapper[4072]: I0223 13:00:37.607090 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} Feb 23 13:00:37.607169 master-0 kubenswrapper[4072]: I0223 13:00:37.607111 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" 
event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} Feb 23 13:00:37.607169 master-0 kubenswrapper[4072]: I0223 13:00:37.607139 4072 scope.go:117] "RemoveContainer" containerID="dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607128 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607292 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607305 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607320 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607340 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607352 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} Feb 23 13:00:37.607541 
master-0 kubenswrapper[4072]: I0223 13:00:37.607362 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607371 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607380 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607389 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607399 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607408 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607417 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607431 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" 
event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607450 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607461 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607472 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607481 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607490 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607499 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607509 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} Feb 23 13:00:37.607541 
master-0 kubenswrapper[4072]: I0223 13:00:37.607518 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607528 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"} Feb 23 13:00:37.607541 master-0 kubenswrapper[4072]: I0223 13:00:37.607525 4072 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607541 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jlkzw" event={"ID":"556c4233-4196-4c65-b5d1-1c3181ebe689","Type":"ContainerDied","Data":"63ce530cb0a173a9b0ff41cae30abeb84b3d356a15907fb440c631cf7fbea736"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607718 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607734 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607747 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607757 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607766 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607775 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607784 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607794 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.607802 4072 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.608969 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"860a9e244b04d91c3a33beb656c339e8751b53849a1636cd6eb8994e31e07960"} Feb 23 13:00:37.609178 master-0 kubenswrapper[4072]: I0223 13:00:37.609039 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" 
event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e"} Feb 23 13:00:37.665108 master-0 kubenswrapper[4072]: I0223 13:00:37.665052 4072 scope.go:117] "RemoveContainer" containerID="5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" Feb 23 13:00:37.681821 master-0 kubenswrapper[4072]: I0223 13:00:37.681760 4072 scope.go:117] "RemoveContainer" containerID="c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" Feb 23 13:00:37.684695 master-0 kubenswrapper[4072]: I0223 13:00:37.684640 4072 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jlkzw"] Feb 23 13:00:37.691860 master-0 kubenswrapper[4072]: I0223 13:00:37.691795 4072 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jlkzw"] Feb 23 13:00:37.707900 master-0 kubenswrapper[4072]: I0223 13:00:37.707747 4072 scope.go:117] "RemoveContainer" containerID="3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" Feb 23 13:00:37.738545 master-0 kubenswrapper[4072]: I0223 13:00:37.738418 4072 scope.go:117] "RemoveContainer" containerID="c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" Feb 23 13:00:37.753217 master-0 kubenswrapper[4072]: I0223 13:00:37.753165 4072 scope.go:117] "RemoveContainer" containerID="45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" Feb 23 13:00:37.766890 master-0 kubenswrapper[4072]: I0223 13:00:37.766827 4072 scope.go:117] "RemoveContainer" containerID="9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e" Feb 23 13:00:37.780617 master-0 kubenswrapper[4072]: I0223 13:00:37.780563 4072 scope.go:117] "RemoveContainer" containerID="9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e" Feb 23 13:00:37.794521 master-0 kubenswrapper[4072]: I0223 13:00:37.794465 4072 scope.go:117] "RemoveContainer" 
containerID="01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39" Feb 23 13:00:37.807728 master-0 kubenswrapper[4072]: I0223 13:00:37.807674 4072 scope.go:117] "RemoveContainer" containerID="dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" Feb 23 13:00:37.808290 master-0 kubenswrapper[4072]: E0223 13:00:37.808202 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": container with ID starting with dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23 not found: ID does not exist" containerID="dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" Feb 23 13:00:37.808378 master-0 kubenswrapper[4072]: I0223 13:00:37.808303 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} err="failed to get container status \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": rpc error: code = NotFound desc = could not find container \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": container with ID starting with dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23 not found: ID does not exist" Feb 23 13:00:37.808378 master-0 kubenswrapper[4072]: I0223 13:00:37.808344 4072 scope.go:117] "RemoveContainer" containerID="5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" Feb 23 13:00:37.808776 master-0 kubenswrapper[4072]: E0223 13:00:37.808707 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": container with ID starting with 5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9 not found: ID does not exist" 
containerID="5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" Feb 23 13:00:37.808852 master-0 kubenswrapper[4072]: I0223 13:00:37.808775 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} err="failed to get container status \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": rpc error: code = NotFound desc = could not find container \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": container with ID starting with 5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9 not found: ID does not exist" Feb 23 13:00:37.808852 master-0 kubenswrapper[4072]: I0223 13:00:37.808815 4072 scope.go:117] "RemoveContainer" containerID="c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" Feb 23 13:00:37.809416 master-0 kubenswrapper[4072]: E0223 13:00:37.809349 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": container with ID starting with c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c not found: ID does not exist" containerID="c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" Feb 23 13:00:37.809508 master-0 kubenswrapper[4072]: I0223 13:00:37.809398 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} err="failed to get container status \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": rpc error: code = NotFound desc = could not find container \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": container with ID starting with c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c not found: ID does not exist" Feb 23 13:00:37.809508 master-0 
kubenswrapper[4072]: I0223 13:00:37.809436 4072 scope.go:117] "RemoveContainer" containerID="3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" Feb 23 13:00:37.809820 master-0 kubenswrapper[4072]: E0223 13:00:37.809761 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": container with ID starting with 3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0 not found: ID does not exist" containerID="3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" Feb 23 13:00:37.809897 master-0 kubenswrapper[4072]: I0223 13:00:37.809808 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} err="failed to get container status \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": rpc error: code = NotFound desc = could not find container \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": container with ID starting with 3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0 not found: ID does not exist" Feb 23 13:00:37.809897 master-0 kubenswrapper[4072]: I0223 13:00:37.809836 4072 scope.go:117] "RemoveContainer" containerID="c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" Feb 23 13:00:37.810559 master-0 kubenswrapper[4072]: E0223 13:00:37.810505 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": container with ID starting with c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a not found: ID does not exist" containerID="c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" Feb 23 13:00:37.810648 master-0 kubenswrapper[4072]: I0223 13:00:37.810557 4072 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} err="failed to get container status \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": rpc error: code = NotFound desc = could not find container \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": container with ID starting with c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a not found: ID does not exist" Feb 23 13:00:37.810648 master-0 kubenswrapper[4072]: I0223 13:00:37.810616 4072 scope.go:117] "RemoveContainer" containerID="45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" Feb 23 13:00:37.811125 master-0 kubenswrapper[4072]: E0223 13:00:37.811069 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": container with ID starting with 45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d not found: ID does not exist" containerID="45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" Feb 23 13:00:37.811125 master-0 kubenswrapper[4072]: I0223 13:00:37.811115 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} err="failed to get container status \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": rpc error: code = NotFound desc = could not find container \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": container with ID starting with 45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d not found: ID does not exist" Feb 23 13:00:37.811310 master-0 kubenswrapper[4072]: I0223 13:00:37.811140 4072 scope.go:117] "RemoveContainer" containerID="9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e" Feb 23 
13:00:37.811714 master-0 kubenswrapper[4072]: E0223 13:00:37.811646 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": container with ID starting with 9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e not found: ID does not exist" containerID="9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e" Feb 23 13:00:37.811714 master-0 kubenswrapper[4072]: I0223 13:00:37.811700 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} err="failed to get container status \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": rpc error: code = NotFound desc = could not find container \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": container with ID starting with 9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e not found: ID does not exist" Feb 23 13:00:37.811849 master-0 kubenswrapper[4072]: I0223 13:00:37.811719 4072 scope.go:117] "RemoveContainer" containerID="9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e" Feb 23 13:00:37.812216 master-0 kubenswrapper[4072]: E0223 13:00:37.812165 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": container with ID starting with 9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e not found: ID does not exist" containerID="9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e" Feb 23 13:00:37.812329 master-0 kubenswrapper[4072]: I0223 13:00:37.812209 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} err="failed 
to get container status \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": rpc error: code = NotFound desc = could not find container \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": container with ID starting with 9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e not found: ID does not exist" Feb 23 13:00:37.812329 master-0 kubenswrapper[4072]: I0223 13:00:37.812238 4072 scope.go:117] "RemoveContainer" containerID="01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39" Feb 23 13:00:37.812860 master-0 kubenswrapper[4072]: E0223 13:00:37.812793 4072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": container with ID starting with 01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39 not found: ID does not exist" containerID="01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39" Feb 23 13:00:37.812860 master-0 kubenswrapper[4072]: I0223 13:00:37.812846 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"} err="failed to get container status \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": rpc error: code = NotFound desc = could not find container \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": container with ID starting with 01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39 not found: ID does not exist" Feb 23 13:00:37.812998 master-0 kubenswrapper[4072]: I0223 13:00:37.812865 4072 scope.go:117] "RemoveContainer" containerID="dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" Feb 23 13:00:37.813401 master-0 kubenswrapper[4072]: I0223 13:00:37.813339 4072 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} err="failed to get container status \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": rpc error: code = NotFound desc = could not find container \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": container with ID starting with dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23 not found: ID does not exist" Feb 23 13:00:37.813401 master-0 kubenswrapper[4072]: I0223 13:00:37.813391 4072 scope.go:117] "RemoveContainer" containerID="5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" Feb 23 13:00:37.813868 master-0 kubenswrapper[4072]: I0223 13:00:37.813811 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} err="failed to get container status \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": rpc error: code = NotFound desc = could not find container \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": container with ID starting with 5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9 not found: ID does not exist" Feb 23 13:00:37.813868 master-0 kubenswrapper[4072]: I0223 13:00:37.813848 4072 scope.go:117] "RemoveContainer" containerID="c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" Feb 23 13:00:37.814397 master-0 kubenswrapper[4072]: I0223 13:00:37.814334 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} err="failed to get container status \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": rpc error: code = NotFound desc = could not find container \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": container with ID starting with 
c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c not found: ID does not exist" Feb 23 13:00:37.814397 master-0 kubenswrapper[4072]: I0223 13:00:37.814381 4072 scope.go:117] "RemoveContainer" containerID="3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" Feb 23 13:00:37.814855 master-0 kubenswrapper[4072]: I0223 13:00:37.814789 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} err="failed to get container status \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": rpc error: code = NotFound desc = could not find container \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": container with ID starting with 3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0 not found: ID does not exist" Feb 23 13:00:37.814855 master-0 kubenswrapper[4072]: I0223 13:00:37.814836 4072 scope.go:117] "RemoveContainer" containerID="c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" Feb 23 13:00:37.815353 master-0 kubenswrapper[4072]: I0223 13:00:37.815277 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} err="failed to get container status \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": rpc error: code = NotFound desc = could not find container \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": container with ID starting with c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a not found: ID does not exist" Feb 23 13:00:37.815353 master-0 kubenswrapper[4072]: I0223 13:00:37.815341 4072 scope.go:117] "RemoveContainer" containerID="45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" Feb 23 13:00:37.815743 master-0 kubenswrapper[4072]: I0223 13:00:37.815700 4072 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} err="failed to get container status \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": rpc error: code = NotFound desc = could not find container \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": container with ID starting with 45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d not found: ID does not exist" Feb 23 13:00:37.815743 master-0 kubenswrapper[4072]: I0223 13:00:37.815725 4072 scope.go:117] "RemoveContainer" containerID="9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e" Feb 23 13:00:37.816130 master-0 kubenswrapper[4072]: I0223 13:00:37.816072 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} err="failed to get container status \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": rpc error: code = NotFound desc = could not find container \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": container with ID starting with 9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e not found: ID does not exist" Feb 23 13:00:37.816130 master-0 kubenswrapper[4072]: I0223 13:00:37.816116 4072 scope.go:117] "RemoveContainer" containerID="9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e" Feb 23 13:00:37.816578 master-0 kubenswrapper[4072]: I0223 13:00:37.816511 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} err="failed to get container status \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": rpc error: code = NotFound desc = could not find container \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": container with ID starting with 
9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e not found: ID does not exist" Feb 23 13:00:37.816578 master-0 kubenswrapper[4072]: I0223 13:00:37.816564 4072 scope.go:117] "RemoveContainer" containerID="01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39" Feb 23 13:00:37.817075 master-0 kubenswrapper[4072]: I0223 13:00:37.817013 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"} err="failed to get container status \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": rpc error: code = NotFound desc = could not find container \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": container with ID starting with 01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39 not found: ID does not exist" Feb 23 13:00:37.817075 master-0 kubenswrapper[4072]: I0223 13:00:37.817065 4072 scope.go:117] "RemoveContainer" containerID="dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" Feb 23 13:00:37.817535 master-0 kubenswrapper[4072]: I0223 13:00:37.817469 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} err="failed to get container status \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": rpc error: code = NotFound desc = could not find container \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": container with ID starting with dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23 not found: ID does not exist" Feb 23 13:00:37.817535 master-0 kubenswrapper[4072]: I0223 13:00:37.817523 4072 scope.go:117] "RemoveContainer" containerID="5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" Feb 23 13:00:37.818169 master-0 kubenswrapper[4072]: I0223 13:00:37.818111 4072 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} err="failed to get container status \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": rpc error: code = NotFound desc = could not find container \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": container with ID starting with 5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9 not found: ID does not exist" Feb 23 13:00:37.818169 master-0 kubenswrapper[4072]: I0223 13:00:37.818155 4072 scope.go:117] "RemoveContainer" containerID="c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" Feb 23 13:00:37.818665 master-0 kubenswrapper[4072]: I0223 13:00:37.818604 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} err="failed to get container status \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": rpc error: code = NotFound desc = could not find container \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": container with ID starting with c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c not found: ID does not exist" Feb 23 13:00:37.818665 master-0 kubenswrapper[4072]: I0223 13:00:37.818651 4072 scope.go:117] "RemoveContainer" containerID="3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" Feb 23 13:00:37.819291 master-0 kubenswrapper[4072]: I0223 13:00:37.819211 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} err="failed to get container status \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": rpc error: code = NotFound desc = could not find container \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": container with ID starting with 
3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0 not found: ID does not exist" Feb 23 13:00:37.819291 master-0 kubenswrapper[4072]: I0223 13:00:37.819287 4072 scope.go:117] "RemoveContainer" containerID="c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" Feb 23 13:00:37.819745 master-0 kubenswrapper[4072]: I0223 13:00:37.819678 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} err="failed to get container status \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": rpc error: code = NotFound desc = could not find container \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": container with ID starting with c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a not found: ID does not exist" Feb 23 13:00:37.819745 master-0 kubenswrapper[4072]: I0223 13:00:37.819736 4072 scope.go:117] "RemoveContainer" containerID="45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" Feb 23 13:00:37.820195 master-0 kubenswrapper[4072]: I0223 13:00:37.820131 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} err="failed to get container status \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": rpc error: code = NotFound desc = could not find container \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": container with ID starting with 45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d not found: ID does not exist" Feb 23 13:00:37.820195 master-0 kubenswrapper[4072]: I0223 13:00:37.820182 4072 scope.go:117] "RemoveContainer" containerID="9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e" Feb 23 13:00:37.820610 master-0 kubenswrapper[4072]: I0223 13:00:37.820566 4072 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} err="failed to get container status \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": rpc error: code = NotFound desc = could not find container \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": container with ID starting with 9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e not found: ID does not exist" Feb 23 13:00:37.820698 master-0 kubenswrapper[4072]: I0223 13:00:37.820593 4072 scope.go:117] "RemoveContainer" containerID="9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e" Feb 23 13:00:37.821045 master-0 kubenswrapper[4072]: I0223 13:00:37.820979 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} err="failed to get container status \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": rpc error: code = NotFound desc = could not find container \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": container with ID starting with 9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e not found: ID does not exist" Feb 23 13:00:37.821045 master-0 kubenswrapper[4072]: I0223 13:00:37.821032 4072 scope.go:117] "RemoveContainer" containerID="01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39" Feb 23 13:00:37.821468 master-0 kubenswrapper[4072]: I0223 13:00:37.821425 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"} err="failed to get container status \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": rpc error: code = NotFound desc = could not find container \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": container with ID starting with 
01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39 not found: ID does not exist" Feb 23 13:00:37.821468 master-0 kubenswrapper[4072]: I0223 13:00:37.821452 4072 scope.go:117] "RemoveContainer" containerID="dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" Feb 23 13:00:37.821892 master-0 kubenswrapper[4072]: I0223 13:00:37.821827 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} err="failed to get container status \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": rpc error: code = NotFound desc = could not find container \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": container with ID starting with dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23 not found: ID does not exist" Feb 23 13:00:37.821892 master-0 kubenswrapper[4072]: I0223 13:00:37.821881 4072 scope.go:117] "RemoveContainer" containerID="5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" Feb 23 13:00:37.822289 master-0 kubenswrapper[4072]: I0223 13:00:37.822214 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} err="failed to get container status \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": rpc error: code = NotFound desc = could not find container \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": container with ID starting with 5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9 not found: ID does not exist" Feb 23 13:00:37.822289 master-0 kubenswrapper[4072]: I0223 13:00:37.822276 4072 scope.go:117] "RemoveContainer" containerID="c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" Feb 23 13:00:37.822754 master-0 kubenswrapper[4072]: I0223 13:00:37.822689 4072 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} err="failed to get container status \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": rpc error: code = NotFound desc = could not find container \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": container with ID starting with c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c not found: ID does not exist" Feb 23 13:00:37.822754 master-0 kubenswrapper[4072]: I0223 13:00:37.822744 4072 scope.go:117] "RemoveContainer" containerID="3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" Feb 23 13:00:37.823238 master-0 kubenswrapper[4072]: I0223 13:00:37.823170 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} err="failed to get container status \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": rpc error: code = NotFound desc = could not find container \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": container with ID starting with 3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0 not found: ID does not exist" Feb 23 13:00:37.823238 master-0 kubenswrapper[4072]: I0223 13:00:37.823227 4072 scope.go:117] "RemoveContainer" containerID="c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" Feb 23 13:00:37.823706 master-0 kubenswrapper[4072]: I0223 13:00:37.823651 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} err="failed to get container status \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": rpc error: code = NotFound desc = could not find container \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": container with ID starting with 
c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a not found: ID does not exist" Feb 23 13:00:37.823706 master-0 kubenswrapper[4072]: I0223 13:00:37.823697 4072 scope.go:117] "RemoveContainer" containerID="45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" Feb 23 13:00:37.824121 master-0 kubenswrapper[4072]: I0223 13:00:37.824061 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} err="failed to get container status \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": rpc error: code = NotFound desc = could not find container \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": container with ID starting with 45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d not found: ID does not exist" Feb 23 13:00:37.824121 master-0 kubenswrapper[4072]: I0223 13:00:37.824102 4072 scope.go:117] "RemoveContainer" containerID="9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e" Feb 23 13:00:37.824511 master-0 kubenswrapper[4072]: I0223 13:00:37.824466 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e"} err="failed to get container status \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": rpc error: code = NotFound desc = could not find container \"9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e\": container with ID starting with 9e76f148bcb7cfbb2c0c1dea3414460a7357cd41152e9e21b5237796a9bffa1e not found: ID does not exist" Feb 23 13:00:37.824617 master-0 kubenswrapper[4072]: I0223 13:00:37.824514 4072 scope.go:117] "RemoveContainer" containerID="9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e" Feb 23 13:00:37.824965 master-0 kubenswrapper[4072]: I0223 13:00:37.824916 4072 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e"} err="failed to get container status \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": rpc error: code = NotFound desc = could not find container \"9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e\": container with ID starting with 9e9278d7363f543972ba0a2d2416908d23a784c9a811bdf96da29b181e43984e not found: ID does not exist" Feb 23 13:00:37.824965 master-0 kubenswrapper[4072]: I0223 13:00:37.824961 4072 scope.go:117] "RemoveContainer" containerID="01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39" Feb 23 13:00:37.825373 master-0 kubenswrapper[4072]: I0223 13:00:37.825307 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39"} err="failed to get container status \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": rpc error: code = NotFound desc = could not find container \"01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39\": container with ID starting with 01d30b5ea7707fe4000a0c04e63f2df439aacbf35da3aae1ae9297a881831b39 not found: ID does not exist" Feb 23 13:00:37.825373 master-0 kubenswrapper[4072]: I0223 13:00:37.825362 4072 scope.go:117] "RemoveContainer" containerID="dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23" Feb 23 13:00:37.825783 master-0 kubenswrapper[4072]: I0223 13:00:37.825740 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23"} err="failed to get container status \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": rpc error: code = NotFound desc = could not find container \"dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23\": container with ID starting with 
dcb86877421f28ab9ea817a6339da52f11a6302fbcd7f96cdad60e1c15cf5c23 not found: ID does not exist" Feb 23 13:00:37.825860 master-0 kubenswrapper[4072]: I0223 13:00:37.825792 4072 scope.go:117] "RemoveContainer" containerID="5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9" Feb 23 13:00:37.826187 master-0 kubenswrapper[4072]: I0223 13:00:37.826126 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9"} err="failed to get container status \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": rpc error: code = NotFound desc = could not find container \"5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9\": container with ID starting with 5bfd125451d2d49348e8b2a37f61848c9e5e2bbae7c9517fe89dced858d7bce9 not found: ID does not exist" Feb 23 13:00:37.826187 master-0 kubenswrapper[4072]: I0223 13:00:37.826174 4072 scope.go:117] "RemoveContainer" containerID="c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c" Feb 23 13:00:37.826665 master-0 kubenswrapper[4072]: I0223 13:00:37.826621 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c"} err="failed to get container status \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": rpc error: code = NotFound desc = could not find container \"c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c\": container with ID starting with c85bb60e9bdaf95978da93e8210c516ca1456c1125c1420a2bde248a03b98d1c not found: ID does not exist" Feb 23 13:00:37.826746 master-0 kubenswrapper[4072]: I0223 13:00:37.826667 4072 scope.go:117] "RemoveContainer" containerID="3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0" Feb 23 13:00:37.827113 master-0 kubenswrapper[4072]: I0223 13:00:37.827053 4072 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0"} err="failed to get container status \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": rpc error: code = NotFound desc = could not find container \"3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0\": container with ID starting with 3d41515cb8962c9f83577909f59f83d692ea2c2c982fc03b7ef0d63e6a2ca3e0 not found: ID does not exist" Feb 23 13:00:37.827113 master-0 kubenswrapper[4072]: I0223 13:00:37.827099 4072 scope.go:117] "RemoveContainer" containerID="c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a" Feb 23 13:00:37.827489 master-0 kubenswrapper[4072]: I0223 13:00:37.827436 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a"} err="failed to get container status \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": rpc error: code = NotFound desc = could not find container \"c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a\": container with ID starting with c21fd04df4e3ba23513a4a010487ca913d97083d3c3da7627404d0f94ebbed7a not found: ID does not exist" Feb 23 13:00:37.827489 master-0 kubenswrapper[4072]: I0223 13:00:37.827482 4072 scope.go:117] "RemoveContainer" containerID="45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d" Feb 23 13:00:37.828059 master-0 kubenswrapper[4072]: I0223 13:00:37.828007 4072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d"} err="failed to get container status \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": rpc error: code = NotFound desc = could not find container \"45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d\": container with ID starting with 
45eb72472fd1069100411412ec1667211997ef253d3c9087e83a6020ab2e0f6d not found: ID does not exist" Feb 23 13:00:38.029374 master-0 kubenswrapper[4072]: I0223 13:00:38.029200 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:00:38.029630 master-0 kubenswrapper[4072]: E0223 13:00:38.029414 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411" Feb 23 13:00:38.038323 master-0 kubenswrapper[4072]: E0223 13:00:38.038214 4072 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:00:38.616130 master-0 kubenswrapper[4072]: I0223 13:00:38.616043 4072 generic.go:334] "Generic (PLEG): container finished" podID="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" containerID="860a9e244b04d91c3a33beb656c339e8751b53849a1636cd6eb8994e31e07960" exitCode=0 Feb 23 13:00:38.616130 master-0 kubenswrapper[4072]: I0223 13:00:38.616100 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerDied","Data":"860a9e244b04d91c3a33beb656c339e8751b53849a1636cd6eb8994e31e07960"} Feb 23 13:00:39.029679 master-0 kubenswrapper[4072]: I0223 13:00:39.029206 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:00:39.029969 master-0 kubenswrapper[4072]: E0223 13:00:39.029912 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d" Feb 23 13:00:39.036365 master-0 kubenswrapper[4072]: I0223 13:00:39.036303 4072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="556c4233-4196-4c65-b5d1-1c3181ebe689" path="/var/lib/kubelet/pods/556c4233-4196-4c65-b5d1-1c3181ebe689/volumes" Feb 23 13:00:39.636339 master-0 kubenswrapper[4072]: I0223 13:00:39.635931 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"1618f34520978a82e99ed74a9dadd9adaa55b3c53fc1ca2aa43ce46367f10274"} Feb 23 13:00:39.636339 master-0 kubenswrapper[4072]: I0223 13:00:39.635996 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"f5e541aa6a85d4c3be5bcd9cae6e3587e45c0df6192c72fa50e3243788ed2c0d"} Feb 23 13:00:39.637506 master-0 kubenswrapper[4072]: I0223 13:00:39.636019 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"24cb1036361062db1031dce10fc5029ef00ce1024000ee16c3357bb021b43615"} Feb 23 13:00:39.637506 master-0 kubenswrapper[4072]: I0223 13:00:39.636465 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"dbf34be7f1cb4a35c03a166744dbaaed6e052b9138ad218f52a31d010c96ebe4"}
Feb 23 13:00:39.637506 master-0 kubenswrapper[4072]: I0223 13:00:39.636489 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"0c278e581ec4476bd3ff87043b9ac65e128cae7479ac52a0093db22c2fd9de77"}
Feb 23 13:00:39.637506 master-0 kubenswrapper[4072]: I0223 13:00:39.636509 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"cc2f74694277fc63f488abf628ba124f3262a0ed8e110f8dcb70d1aa5be37478"}
Feb 23 13:00:40.029521 master-0 kubenswrapper[4072]: I0223 13:00:40.029410 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:40.029803 master-0 kubenswrapper[4072]: E0223 13:00:40.029600 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:41.028880 master-0 kubenswrapper[4072]: I0223 13:00:41.028746 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:41.029915 master-0 kubenswrapper[4072]: E0223 13:00:41.028987 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:42.029462 master-0 kubenswrapper[4072]: I0223 13:00:42.029384 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:42.030372 master-0 kubenswrapper[4072]: E0223 13:00:42.029600 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:42.654872 master-0 kubenswrapper[4072]: I0223 13:00:42.654746 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"d9b54249afb5237a79519a3a2c68f1b97007e7d06a6527997e9ada4d58893e66"}
Feb 23 13:00:42.866697 master-0 kubenswrapper[4072]: I0223 13:00:42.866585 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 13:00:42.866964 master-0 kubenswrapper[4072]: E0223 13:00:42.866807 4072 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 23 13:00:42.866964 master-0 kubenswrapper[4072]: E0223 13:00:42.866930 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:46.866898968 +0000 UTC m=+194.677055611 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found
Feb 23 13:00:43.030103 master-0 kubenswrapper[4072]: I0223 13:00:43.030036 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:43.030902 master-0 kubenswrapper[4072]: E0223 13:00:43.030737 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:43.043511 master-0 kubenswrapper[4072]: E0223 13:00:43.043448 4072 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 13:00:43.047207 master-0 kubenswrapper[4072]: I0223 13:00:43.047156 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Feb 23 13:00:44.029172 master-0 kubenswrapper[4072]: I0223 13:00:44.029104 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:44.029362 master-0 kubenswrapper[4072]: E0223 13:00:44.029333 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:44.670383 master-0 kubenswrapper[4072]: I0223 13:00:44.670239 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" event={"ID":"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2","Type":"ContainerStarted","Data":"b4f48137776f175bb1f35e773b46d59b3b2fc491e834c8866048a19964a0b9dd"}
Feb 23 13:00:44.671172 master-0 kubenswrapper[4072]: I0223 13:00:44.670840 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:44.671172 master-0 kubenswrapper[4072]: I0223 13:00:44.670992 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:44.671172 master-0 kubenswrapper[4072]: I0223 13:00:44.671039 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:44.706384 master-0 kubenswrapper[4072]: I0223 13:00:44.706278 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" podStartSLOduration=7.7062121569999995 podStartE2EDuration="7.706212157s" podCreationTimestamp="2026-02-23 13:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:44.705843916 +0000 UTC m=+132.516000568" watchObservedRunningTime="2026-02-23 13:00:44.706212157 +0000 UTC m=+132.516368829"
Feb 23 13:00:44.706695 master-0 kubenswrapper[4072]: I0223 13:00:44.706652 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:44.710019 master-0 kubenswrapper[4072]: I0223 13:00:44.709965 4072 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:00:44.768187 master-0 kubenswrapper[4072]: I0223 13:00:44.768105 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.76808067 podStartE2EDuration="1.76808067s" podCreationTimestamp="2026-02-23 13:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:00:44.731849255 +0000 UTC m=+132.542005897" watchObservedRunningTime="2026-02-23 13:00:44.76808067 +0000 UTC m=+132.578237322"
Feb 23 13:00:44.904190 master-0 kubenswrapper[4072]: I0223 13:00:44.904085 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-shl6r"]
Feb 23 13:00:44.904702 master-0 kubenswrapper[4072]: I0223 13:00:44.904301 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:44.904702 master-0 kubenswrapper[4072]: E0223 13:00:44.904455 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:44.905816 master-0 kubenswrapper[4072]: I0223 13:00:44.905754 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq2rk"]
Feb 23 13:00:44.906030 master-0 kubenswrapper[4072]: I0223 13:00:44.905908 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:44.906030 master-0 kubenswrapper[4072]: E0223 13:00:44.906052 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:45.087940 master-0 kubenswrapper[4072]: I0223 13:00:45.087849 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:45.088369 master-0 kubenswrapper[4072]: E0223 13:00:45.088192 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 13:00:45.088369 master-0 kubenswrapper[4072]: E0223 13:00:45.088278 4072 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 13:00:45.088369 master-0 kubenswrapper[4072]: E0223 13:00:45.088308 4072 projected.go:194] Error preparing data for projected volume kube-api-access-q2cgc for pod openshift-network-diagnostics/network-check-target-shl6r: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:45.088552 master-0 kubenswrapper[4072]: E0223 13:00:45.088407 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc podName:d0c7587b-eea6-4d98-b39d-3a0feba4035d nodeName:}" failed. No retries permitted until 2026-02-23 13:01:17.088377905 +0000 UTC m=+164.898534547 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-q2cgc" (UniqueName: "kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc") pod "network-check-target-shl6r" (UID: "d0c7587b-eea6-4d98-b39d-3a0feba4035d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:00:47.029280 master-0 kubenswrapper[4072]: I0223 13:00:47.029182 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:47.031750 master-0 kubenswrapper[4072]: E0223 13:00:47.029408 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:47.031750 master-0 kubenswrapper[4072]: I0223 13:00:47.029483 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:47.031750 master-0 kubenswrapper[4072]: E0223 13:00:47.029662 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:48.044986 master-0 kubenswrapper[4072]: E0223 13:00:48.044915 4072 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 13:00:49.029646 master-0 kubenswrapper[4072]: I0223 13:00:49.029161 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:49.029915 master-0 kubenswrapper[4072]: E0223 13:00:49.029734 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:49.029915 master-0 kubenswrapper[4072]: I0223 13:00:49.029439 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:49.030126 master-0 kubenswrapper[4072]: E0223 13:00:49.029988 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:51.028720 master-0 kubenswrapper[4072]: I0223 13:00:51.028617 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:51.029854 master-0 kubenswrapper[4072]: I0223 13:00:51.028638 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:51.029854 master-0 kubenswrapper[4072]: E0223 13:00:51.028793 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:51.029854 master-0 kubenswrapper[4072]: E0223 13:00:51.028900 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:53.028739 master-0 kubenswrapper[4072]: I0223 13:00:53.028617 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:53.030568 master-0 kubenswrapper[4072]: E0223 13:00:53.030472 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-shl6r" podUID="d0c7587b-eea6-4d98-b39d-3a0feba4035d"
Feb 23 13:00:53.030568 master-0 kubenswrapper[4072]: I0223 13:00:53.030549 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:53.030727 master-0 kubenswrapper[4072]: E0223 13:00:53.030689 4072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2rk" podUID="e7fbab55-8405-44f4-ae2a-412c115ce411"
Feb 23 13:00:55.029135 master-0 kubenswrapper[4072]: I0223 13:00:55.029018 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:00:55.030334 master-0 kubenswrapper[4072]: I0223 13:00:55.029530 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:00:55.032095 master-0 kubenswrapper[4072]: I0223 13:00:55.032037 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 23 13:00:55.032095 master-0 kubenswrapper[4072]: I0223 13:00:55.032064 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 23 13:00:55.032618 master-0 kubenswrapper[4072]: I0223 13:00:55.032570 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 23 13:00:59.630008 master-0 kubenswrapper[4072]: I0223 13:00:59.629591 4072 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Feb 23 13:00:59.671773 master-0 kubenswrapper[4072]: I0223 13:00:59.671718 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"]
Feb 23 13:00:59.672180 master-0 kubenswrapper[4072]: I0223 13:00:59.672146 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.678063 master-0 kubenswrapper[4072]: I0223 13:00:59.677939 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-28zcz"]
Feb 23 13:00:59.678660 master-0 kubenswrapper[4072]: I0223 13:00:59.678611 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.679683 master-0 kubenswrapper[4072]: I0223 13:00:59.679623 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 23 13:00:59.680408 master-0 kubenswrapper[4072]: I0223 13:00:59.679654 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 23 13:00:59.680566 master-0 kubenswrapper[4072]: I0223 13:00:59.680447 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 23 13:00:59.680922 master-0 kubenswrapper[4072]: I0223 13:00:59.680870 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 23 13:00:59.682435 master-0 kubenswrapper[4072]: I0223 13:00:59.682350 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"]
Feb 23 13:00:59.683171 master-0 kubenswrapper[4072]: I0223 13:00:59.683124 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:00:59.687333 master-0 kubenswrapper[4072]: I0223 13:00:59.687235 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 23 13:00:59.687645 master-0 kubenswrapper[4072]: I0223 13:00:59.687596 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 23 13:00:59.687917 master-0 kubenswrapper[4072]: I0223 13:00:59.687874 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 23 13:00:59.688050 master-0 kubenswrapper[4072]: I0223 13:00:59.687990 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 23 13:00:59.688174 master-0 kubenswrapper[4072]: I0223 13:00:59.688116 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 23 13:00:59.688360 master-0 kubenswrapper[4072]: I0223 13:00:59.688281 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"]
Feb 23 13:00:59.688914 master-0 kubenswrapper[4072]: I0223 13:00:59.688855 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.690475 master-0 kubenswrapper[4072]: I0223 13:00:59.690425 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 23 13:00:59.698835 master-0 kubenswrapper[4072]: I0223 13:00:59.698768 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 23 13:00:59.699159 master-0 kubenswrapper[4072]: I0223 13:00:59.699118 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 23 13:00:59.701626 master-0 kubenswrapper[4072]: I0223 13:00:59.699376 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 23 13:00:59.701626 master-0 kubenswrapper[4072]: I0223 13:00:59.699642 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 23 13:00:59.703108 master-0 kubenswrapper[4072]: I0223 13:00:59.703037 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"]
Feb 23 13:00:59.703796 master-0 kubenswrapper[4072]: I0223 13:00:59.703717 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"]
Feb 23 13:00:59.704362 master-0 kubenswrapper[4072]: I0223 13:00:59.704201 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:00:59.704513 master-0 kubenswrapper[4072]: I0223 13:00:59.704465 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:00:59.707283 master-0 kubenswrapper[4072]: I0223 13:00:59.706745 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"]
Feb 23 13:00:59.707283 master-0 kubenswrapper[4072]: I0223 13:00:59.707224 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"]
Feb 23 13:00:59.707674 master-0 kubenswrapper[4072]: I0223 13:00:59.707602 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:00:59.707924 master-0 kubenswrapper[4072]: I0223 13:00:59.707816 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:00:59.713001 master-0 kubenswrapper[4072]: I0223 13:00:59.712926 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924"]
Feb 23 13:00:59.713885 master-0 kubenswrapper[4072]: I0223 13:00:59.713840 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924"
Feb 23 13:00:59.732563 master-0 kubenswrapper[4072]: I0223 13:00:59.731486 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-7466r"]
Feb 23 13:00:59.735313 master-0 kubenswrapper[4072]: I0223 13:00:59.735274 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:00:59.735313 master-0 kubenswrapper[4072]: I0223 13:00:59.735322 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvr7p\" (UniqueName: \"kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:00:59.735499 master-0 kubenswrapper[4072]: I0223 13:00:59.735365 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.735499 master-0 kubenswrapper[4072]: I0223 13:00:59.735390 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slw4h\" (UniqueName: \"kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.735626 master-0 kubenswrapper[4072]: I0223 13:00:59.735491 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.735626 master-0 kubenswrapper[4072]: I0223 13:00:59.735551 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmv5f\" (UniqueName: \"kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.735626 master-0 kubenswrapper[4072]: I0223 13:00:59.735565 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 23 13:00:59.735626 master-0 kubenswrapper[4072]: I0223 13:00:59.735584 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.735626 master-0 kubenswrapper[4072]: I0223 13:00:59.735629 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.735870 master-0 kubenswrapper[4072]: I0223 13:00:59.735661 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.735870 master-0 kubenswrapper[4072]: I0223 13:00:59.735684 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2csk2\" (UniqueName: \"kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.735870 master-0 kubenswrapper[4072]: I0223 13:00:59.735710 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.735870 master-0 kubenswrapper[4072]: I0223 13:00:59.735719 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 23 13:00:59.735870 master-0 kubenswrapper[4072]: I0223 13:00:59.735749 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.735870 master-0 kubenswrapper[4072]: I0223 13:00:59.735795 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 23 13:00:59.736157 master-0 kubenswrapper[4072]: I0223 13:00:59.735966 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 23 13:00:59.736157 master-0 kubenswrapper[4072]: I0223 13:00:59.736034 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 13:00:59.736157 master-0 kubenswrapper[4072]: I0223 13:00:59.736096 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 23 13:00:59.736346 master-0 kubenswrapper[4072]: I0223 13:00:59.736210 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 23 13:00:59.736346 master-0 kubenswrapper[4072]: I0223 13:00:59.736342 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 23 13:00:59.737347 master-0 kubenswrapper[4072]: I0223 13:00:59.736803 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:00:59.737347 master-0 kubenswrapper[4072]: I0223 13:00:59.736979 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 23 13:00:59.737347 master-0 kubenswrapper[4072]: I0223 13:00:59.737001 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 23 13:00:59.737347 master-0 kubenswrapper[4072]: I0223 13:00:59.737139 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 13:00:59.739451 master-0 kubenswrapper[4072]: I0223 13:00:59.737465 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 23 13:00:59.739451 master-0 kubenswrapper[4072]: I0223 13:00:59.737494 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 23 13:00:59.739451 master-0 kubenswrapper[4072]: I0223 13:00:59.737525 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 23 13:00:59.739451 master-0 kubenswrapper[4072]: I0223 13:00:59.737825 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 23 13:00:59.739451 master-0 kubenswrapper[4072]: I0223 13:00:59.737842 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 23 13:00:59.739451 master-0 kubenswrapper[4072]: I0223 13:00:59.737972 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 23 13:00:59.739451 master-0 kubenswrapper[4072]: I0223 13:00:59.737991 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"]
Feb 23 13:00:59.739451 master-0 kubenswrapper[4072]: I0223 13:00:59.738575 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.739862 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"]
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.740295 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.740988 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"]
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.741761 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.743115 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-gswst"]
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.743552 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.744228 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"]
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.744652 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:00:59.745572 master-0 kubenswrapper[4072]: I0223 13:00:59.745115 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 23 13:00:59.746156 master-0 kubenswrapper[4072]: I0223 13:00:59.745649 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:00:59.746156 master-0 kubenswrapper[4072]: I0223 13:00:59.745742 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 23 13:00:59.746156 master-0 kubenswrapper[4072]: I0223 13:00:59.745652 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 23 13:00:59.746156 master-0 kubenswrapper[4072]: I0223 13:00:59.745917 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.746771 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.747018 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223
13:00:59.747133 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.747188 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.747286 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.747341 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"] Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.746776 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.748091 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.748097 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"] Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.747394 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.748630 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.748693 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:00:59.752297 master-0 kubenswrapper[4072]: I0223 13:00:59.749735 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 13:00:59.753626 master-0 kubenswrapper[4072]: I0223 13:00:59.753585 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"] Feb 23 13:00:59.753906 master-0 kubenswrapper[4072]: I0223 13:00:59.753876 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"] Feb 23 13:00:59.754191 master-0 kubenswrapper[4072]: I0223 13:00:59.754161 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:00:59.756708 master-0 kubenswrapper[4072]: I0223 13:00:59.754528 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"] Feb 23 13:00:59.756708 master-0 kubenswrapper[4072]: I0223 13:00:59.754619 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:00:59.756708 master-0 kubenswrapper[4072]: I0223 13:00:59.755854 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"] Feb 23 13:00:59.756708 master-0 kubenswrapper[4072]: I0223 13:00:59.756453 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:00:59.756918 master-0 kubenswrapper[4072]: I0223 13:00:59.756835 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:00:59.760694 master-0 kubenswrapper[4072]: I0223 13:00:59.760587 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"] Feb 23 13:00:59.762956 master-0 kubenswrapper[4072]: I0223 13:00:59.762690 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"] Feb 23 13:00:59.763033 master-0 kubenswrapper[4072]: I0223 13:00:59.762962 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"] Feb 23 13:00:59.763033 master-0 kubenswrapper[4072]: I0223 13:00:59.762976 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-28zcz"] Feb 23 13:00:59.763959 master-0 kubenswrapper[4072]: I0223 13:00:59.763922 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"] Feb 23 13:00:59.764201 master-0 kubenswrapper[4072]: I0223 13:00:59.764169 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 13:00:59.766974 master-0 kubenswrapper[4072]: I0223 13:00:59.766926 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 13:00:59.767262 master-0 kubenswrapper[4072]: I0223 13:00:59.767211 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 13:00:59.767478 
master-0 kubenswrapper[4072]: I0223 13:00:59.767444 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 13:00:59.767478 master-0 kubenswrapper[4072]: I0223 13:00:59.767468 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 13:00:59.767563 master-0 kubenswrapper[4072]: I0223 13:00:59.767545 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.775515 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.775883 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776052 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776159 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776209 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776472 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776609 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 23 
13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776628 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776718 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776844 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776940 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.776949 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.777032 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.777166 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.777173 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.777217 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 13:00:59.777373 master-0 kubenswrapper[4072]: I0223 13:00:59.777316 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 
13:00:59.778134 master-0 kubenswrapper[4072]: I0223 13:00:59.777492 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 23 13:00:59.778134 master-0 kubenswrapper[4072]: I0223 13:00:59.777624 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 13:00:59.778134 master-0 kubenswrapper[4072]: I0223 13:00:59.777891 4072 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 13:00:59.791983 master-0 kubenswrapper[4072]: I0223 13:00:59.791159 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 13:00:59.791983 master-0 kubenswrapper[4072]: I0223 13:00:59.791926 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:00:59.799556 master-0 kubenswrapper[4072]: I0223 13:00:59.799501 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"] Feb 23 13:00:59.800132 master-0 kubenswrapper[4072]: I0223 13:00:59.800104 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"] Feb 23 13:00:59.814062 master-0 kubenswrapper[4072]: I0223 13:00:59.813559 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"] Feb 23 13:00:59.814062 master-0 kubenswrapper[4072]: I0223 13:00:59.813594 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924"] Feb 23 13:00:59.814062 master-0 kubenswrapper[4072]: I0223 13:00:59.813606 4072 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"] Feb 23 13:00:59.814062 master-0 kubenswrapper[4072]: I0223 13:00:59.813618 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"] Feb 23 13:00:59.814062 master-0 kubenswrapper[4072]: I0223 13:00:59.813631 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"] Feb 23 13:00:59.814062 master-0 kubenswrapper[4072]: I0223 13:00:59.813644 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"] Feb 23 13:00:59.814062 master-0 kubenswrapper[4072]: I0223 13:00:59.813657 4072 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-qd2ns"] Feb 23 13:00:59.814460 master-0 kubenswrapper[4072]: I0223 13:00:59.814146 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"] Feb 23 13:00:59.814460 master-0 kubenswrapper[4072]: I0223 13:00:59.814162 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"] Feb 23 13:00:59.814460 master-0 kubenswrapper[4072]: I0223 13:00:59.814174 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"] Feb 23 13:00:59.814460 master-0 kubenswrapper[4072]: I0223 13:00:59.814284 4072 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:00:59.816978 master-0 kubenswrapper[4072]: I0223 13:00:59.800444 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 13:00:59.816978 master-0 kubenswrapper[4072]: I0223 13:00:59.803873 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 13:00:59.816978 master-0 kubenswrapper[4072]: I0223 13:00:59.813437 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 13:00:59.821349 master-0 kubenswrapper[4072]: I0223 13:00:59.821310 4072 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 13:00:59.821515 master-0 kubenswrapper[4072]: I0223 13:00:59.821485 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-7466r"] Feb 23 13:00:59.821566 master-0 kubenswrapper[4072]: I0223 13:00:59.821519 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"] Feb 23 13:00:59.830057 master-0 kubenswrapper[4072]: I0223 13:00:59.829816 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"] Feb 23 13:00:59.830626 master-0 kubenswrapper[4072]: I0223 13:00:59.830586 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"] Feb 23 13:00:59.831334 master-0 kubenswrapper[4072]: I0223 13:00:59.831300 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-gswst"] Feb 23 13:00:59.836529 master-0 kubenswrapper[4072]: I0223 13:00:59.836489 4072 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:00:59.836592 master-0 kubenswrapper[4072]: I0223 13:00:59.836539 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:00:59.836637 master-0 kubenswrapper[4072]: I0223 13:00:59.836609 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:00:59.836671 master-0 kubenswrapper[4072]: I0223 13:00:59.836650 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:00:59.836703 master-0 kubenswrapper[4072]: I0223 13:00:59.836673 4072 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:00:59.836733 master-0 kubenswrapper[4072]: I0223 13:00:59.836702 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:00:59.836733 master-0 kubenswrapper[4072]: I0223 13:00:59.836722 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:00:59.836795 master-0 kubenswrapper[4072]: I0223 13:00:59.836752 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6rg\" (UniqueName: \"kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:00:59.836795 master-0 kubenswrapper[4072]: I0223 13:00:59.836773 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:00:59.836857 master-0 kubenswrapper[4072]: I0223 13:00:59.836793 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:00:59.836857 master-0 kubenswrapper[4072]: I0223 13:00:59.836813 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:00:59.836857 master-0 kubenswrapper[4072]: I0223 13:00:59.836834 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:00:59.836857 master-0 kubenswrapper[4072]: I0223 13:00:59.836850 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token\") pod 
\"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:00:59.836965 master-0 kubenswrapper[4072]: I0223 13:00:59.836942 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz9fr\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:00:59.836996 master-0 kubenswrapper[4072]: I0223 13:00:59.836964 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrhrx\" (UniqueName: \"kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:00:59.837111 master-0 kubenswrapper[4072]: I0223 13:00:59.837063 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:00:59.837111 master-0 kubenswrapper[4072]: I0223 13:00:59.837087 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4j2q\" (UniqueName: \"kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " 
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:00:59.837205 master-0 kubenswrapper[4072]: I0223 13:00:59.837155 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8l8f\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:00:59.837239 master-0 kubenswrapper[4072]: I0223 13:00:59.837219 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:00:59.837305 master-0 kubenswrapper[4072]: I0223 13:00:59.837269 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:00:59.837360 master-0 kubenswrapper[4072]: I0223 13:00:59.837314 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:00:59.837360 master-0 kubenswrapper[4072]: I0223 13:00:59.837341 
4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:00:59.837440 master-0 kubenswrapper[4072]: I0223 13:00:59.837362 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfrht\" (UniqueName: \"kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht\") pod \"csi-snapshot-controller-operator-6fb4df594f-sx924\" (UID: \"b7585f9f-12e5-451b-beeb-db43ae778f25\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924"
Feb 23 13:00:59.837489 master-0 kubenswrapper[4072]: I0223 13:00:59.837442 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.837489 master-0 kubenswrapper[4072]: I0223 13:00:59.837476 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slw4h\" (UniqueName: \"kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.837573 master-0 kubenswrapper[4072]: I0223 13:00:59.837495 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:00:59.837573 master-0 kubenswrapper[4072]: I0223 13:00:59.837526 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"
Feb 23 13:00:59.837573 master-0 kubenswrapper[4072]: I0223 13:00:59.837565 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.837704 master-0 kubenswrapper[4072]: I0223 13:00:59.837583 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv5f\" (UniqueName: \"kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.837704 master-0 kubenswrapper[4072]: I0223 13:00:59.837632 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvd2\" (UniqueName: \"kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:00:59.837704 master-0 kubenswrapper[4072]: I0223 13:00:59.837654 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.837704 master-0 kubenswrapper[4072]: I0223 13:00:59.837672 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.838274 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.838327 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: E0223 13:00:59.838434 4072 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.838860 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.838913 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.838936 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.838954 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.838971 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2csk2\" (UniqueName: \"kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.838988 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.839007 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.839031 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.839049 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.839073 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.839089 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjthf\" (UniqueName: \"kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:00:59.841339 master-0 kubenswrapper[4072]: I0223 13:00:59.839107 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: I0223 13:00:59.839131 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgkv\" (UniqueName: \"kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: I0223 13:00:59.839148 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: E0223 13:00:59.839200 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.339182409 +0000 UTC m=+148.149339021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: E0223 13:00:59.839221 4072 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: I0223 13:00:59.839575 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: E0223 13:00:59.839628 4072 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: I0223 13:00:59.839626 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: E0223 13:00:59.839672 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.339659073 +0000 UTC m=+148.149815685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: E0223 13:00:59.839746 4072 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: I0223 13:00:59.839774 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: E0223 13:00:59.839817 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.339793507 +0000 UTC m=+148.149950339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: I0223 13:00:59.839839 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: I0223 13:00:59.839881 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdnn5\" (UniqueName: \"kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.842106 master-0 kubenswrapper[4072]: I0223 13:00:59.839905 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.839957 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.839995 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840014 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7h97\" (UniqueName: \"kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840053 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvr7p\" (UniqueName: \"kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840158 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: E0223 13:00:59.840201 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.340160398 +0000 UTC m=+148.150317010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840234 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840286 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4h6l\" (UniqueName: \"kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840307 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840327 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840349 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840384 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmw9r\" (UniqueName: \"kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840413 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.848700 master-0 kubenswrapper[4072]: I0223 13:00:59.840431 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.851806 master-0 kubenswrapper[4072]: I0223 13:00:59.840449 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"
Feb 23 13:00:59.851806 master-0 kubenswrapper[4072]: I0223 13:00:59.840484 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:00:59.851806 master-0 kubenswrapper[4072]: I0223 13:00:59.845730 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.861313 master-0 kubenswrapper[4072]: I0223 13:00:59.861272 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2csk2\" (UniqueName: \"kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:00:59.861587 master-0 kubenswrapper[4072]: I0223 13:00:59.861546 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv5f\" (UniqueName: \"kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:00:59.861955 master-0 kubenswrapper[4072]: I0223 13:00:59.861915 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvr7p\" (UniqueName: \"kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:00:59.864546 master-0 kubenswrapper[4072]: I0223 13:00:59.864497 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slw4h\" (UniqueName: \"kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:00:59.941482 master-0 kubenswrapper[4072]: I0223 13:00:59.941421 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:00:59.941598 master-0 kubenswrapper[4072]: I0223 13:00:59.941494 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:00:59.941598 master-0 kubenswrapper[4072]: I0223 13:00:59.941531 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmw9r\" (UniqueName: \"kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:00:59.941598 master-0 kubenswrapper[4072]: I0223 13:00:59.941568 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.941736 master-0 kubenswrapper[4072]: I0223 13:00:59.941609 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:00:59.941736 master-0 kubenswrapper[4072]: I0223 13:00:59.941648 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.941736 master-0 kubenswrapper[4072]: I0223 13:00:59.941703 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"
Feb 23 13:00:59.942577 master-0 kubenswrapper[4072]: I0223 13:00:59.942538 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:00:59.942630 master-0 kubenswrapper[4072]: I0223 13:00:59.942586 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:00:59.942630 master-0 kubenswrapper[4072]: I0223 13:00:59.942613 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:00:59.942707 master-0 kubenswrapper[4072]: I0223 13:00:59.942640 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.942707 master-0 kubenswrapper[4072]: I0223 13:00:59.942663 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"
Feb 23 13:00:59.942707 master-0 kubenswrapper[4072]: I0223 13:00:59.942687 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"
Feb 23 13:00:59.942829 master-0 kubenswrapper[4072]: I0223 13:00:59.942726 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:00:59.942829 master-0 kubenswrapper[4072]: I0223 13:00:59.942763 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6rg\" (UniqueName: \"kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:00:59.942829 master-0 kubenswrapper[4072]: I0223 13:00:59.942786 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:00:59.942829 master-0 kubenswrapper[4072]: I0223 13:00:59.942809 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:00:59.943144 master-0 kubenswrapper[4072]: I0223 13:00:59.942832 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:00:59.943144 master-0 kubenswrapper[4072]: I0223 13:00:59.942856 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:00:59.943144 master-0 kubenswrapper[4072]: I0223 13:00:59.942880 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz9fr\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943324 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrhrx\" (UniqueName: \"kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"
Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943389 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943426 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4j2q\" (UniqueName: \"kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943461 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943493 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8l8f\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943526 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"
Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943557 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943595 4072
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943599 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943628 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.943670 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfrht\" (UniqueName: \"kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht\") pod \"csi-snapshot-controller-operator-6fb4df594f-sx924\" (UID: \"b7585f9f-12e5-451b-beeb-db43ae778f25\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.944172 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: 
\"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: E0223 13:00:59.944353 4072 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: E0223 13:00:59.944420 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.444399706 +0000 UTC m=+148.254556338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: I0223 13:00:59.944611 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:00:59.948039 master-0 kubenswrapper[4072]: E0223 13:00:59.944630 4072 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: E0223 13:00:59.944697 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:01:00.444679984 +0000 UTC m=+148.254836596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.944700 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: E0223 13:00:59.944734 4072 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: E0223 13:00:59.944792 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.444781547 +0000 UTC m=+148.254938159 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.944816 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.944866 4072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plz5n\" (UniqueName: \"kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.944947 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.944967 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.945038 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.945056 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.945072 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngvd2\" (UniqueName: \"kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.945128 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.945155 4072 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:00:59.948744 master-0 kubenswrapper[4072]: I0223 13:00:59.945195 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945223 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945270 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945308 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjthf\" (UniqueName: \"kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf\") pod \"dns-operator-8c7d49845-7466r\" 
(UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945355 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhgkv\" (UniqueName: \"kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945373 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945390 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945408 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 
13:00:59.945459 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945476 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: E0223 13:00:59.945480 4072 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945516 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdnn5\" (UniqueName: \"kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: E0223 13:00:59.945523 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.445510419 +0000 UTC m=+148.255667031 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945542 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945572 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:00:59.950930 master-0 kubenswrapper[4072]: I0223 13:00:59.945601 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.945629 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7h97\" (UniqueName: \"kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.946551 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.946712 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: E0223 13:00:59.946785 4072 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.947048 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.947133 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.947180 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.947234 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: E0223 13:00:59.947339 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.447325874 +0000 UTC m=+148.257482586 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: E0223 13:00:59.947126 4072 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.947381 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: E0223 13:00:59.947415 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:00.447397966 +0000 UTC m=+148.257554578 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.947805 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.947927 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:00:59.951557 master-0 kubenswrapper[4072]: I0223 13:00:59.948140 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.948321 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" 
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.948509 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.948547 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4h6l\" (UniqueName: \"kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.948580 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.948755 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.949282 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.949280 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.949457 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.950030 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.950087 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.950793 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.950862 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.950910 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.951156 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"
Feb 23 13:00:59.952165 master-0 kubenswrapper[4072]: I0223 13:00:59.951780 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:00:59.953019 master-0 kubenswrapper[4072]: I0223 13:00:59.951991 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:00:59.953139 master-0 kubenswrapper[4072]: I0223 13:00:59.953095 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:00:59.954081 master-0 kubenswrapper[4072]: I0223 13:00:59.954037 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:00:59.958489 master-0 kubenswrapper[4072]: I0223 13:00:59.958448 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmw9r\" (UniqueName: \"kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:00:59.980332 master-0 kubenswrapper[4072]: I0223 13:00:59.980209 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:00:59.981061 master-0 kubenswrapper[4072]: I0223 13:00:59.980679 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz9fr\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:00:59.981061 master-0 kubenswrapper[4072]: I0223 13:00:59.981057 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:00:59.982253 master-0 kubenswrapper[4072]: I0223 13:00:59.982189 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:00:59.982372 master-0 kubenswrapper[4072]: I0223 13:00:59.982326 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrhrx\" (UniqueName: \"kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"
Feb 23 13:00:59.983591 master-0 kubenswrapper[4072]: I0223 13:00:59.983554 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4j2q\" (UniqueName: \"kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:00:59.983890 master-0 kubenswrapper[4072]: I0223 13:00:59.983842 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8l8f\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:00:59.984051 master-0 kubenswrapper[4072]: I0223 13:00:59.984011 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6rg\" (UniqueName: \"kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:00:59.985299 master-0 kubenswrapper[4072]: I0223 13:00:59.985233 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"
Feb 23 13:00:59.986028 master-0 kubenswrapper[4072]: I0223 13:00:59.985981 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:00:59.995005 master-0 kubenswrapper[4072]: I0223 13:00:59.994983 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:01:00.007208 master-0 kubenswrapper[4072]: I0223 13:01:00.007186 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdnn5\" (UniqueName: \"kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:01:00.009522 master-0 kubenswrapper[4072]: I0223 13:01:00.009501 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:01:00.014522 master-0 kubenswrapper[4072]: I0223 13:01:00.014502 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"
Feb 23 13:01:00.022651 master-0 kubenswrapper[4072]: I0223 13:01:00.022608 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:01:00.034882 master-0 kubenswrapper[4072]: I0223 13:01:00.034843 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjthf\" (UniqueName: \"kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:00.038204 master-0 kubenswrapper[4072]: I0223 13:01:00.038169 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"
Feb 23 13:01:00.050283 master-0 kubenswrapper[4072]: I0223 13:01:00.050185 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:00.051712 master-0 kubenswrapper[4072]: I0223 13:01:00.050769 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:00.051712 master-0 kubenswrapper[4072]: I0223 13:01:00.050800 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plz5n\" (UniqueName: \"kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:00.051712 master-0 kubenswrapper[4072]: I0223 13:01:00.051124 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:00.051712 master-0 kubenswrapper[4072]: I0223 13:01:00.051270 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:00.059704 master-0 kubenswrapper[4072]: I0223 13:01:00.059671 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhgkv\" (UniqueName: \"kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:00.076402 master-0 kubenswrapper[4072]: I0223 13:01:00.076367 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7h97\" (UniqueName: \"kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"
Feb 23 13:01:00.097695 master-0 kubenswrapper[4072]: I0223 13:01:00.097438 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfrht\" (UniqueName: \"kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht\") pod \"csi-snapshot-controller-operator-6fb4df594f-sx924\" (UID: \"b7585f9f-12e5-451b-beeb-db43ae778f25\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924"
Feb 23 13:01:00.124176 master-0 kubenswrapper[4072]: I0223 13:01:00.124141 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngvd2\" (UniqueName: \"kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:00.147398 master-0 kubenswrapper[4072]: I0223 13:01:00.147282 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4h6l\" (UniqueName: \"kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:01:00.175506 master-0 kubenswrapper[4072]: I0223 13:01:00.174136 4072 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plz5n\" (UniqueName: \"kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:00.180815 master-0 kubenswrapper[4072]: I0223 13:01:00.180733 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:01:00.188985 master-0 kubenswrapper[4072]: I0223 13:01:00.186840 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:01:00.204895 master-0 kubenswrapper[4072]: I0223 13:01:00.204519 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:01:00.210437 master-0 kubenswrapper[4072]: I0223 13:01:00.210394 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924"
Feb 23 13:01:00.231326 master-0 kubenswrapper[4072]: I0223 13:01:00.230525 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:01:00.240266 master-0 kubenswrapper[4072]: I0223 13:01:00.240006 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:01:00.241831 master-0 kubenswrapper[4072]: I0223 13:01:00.241608 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"]
Feb 23 13:01:00.247632 master-0 kubenswrapper[4072]: W0223 13:01:00.247593 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25b5540c_da7d_4b6f_a15f_394451f4674e.slice/crio-497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636 WatchSource:0}: Error finding container 497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636: Status 404 returned error can't find the container with id 497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636
Feb 23 13:01:00.294261 master-0 kubenswrapper[4072]: I0223 13:01:00.293588 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"
Feb 23 13:01:00.312654 master-0 kubenswrapper[4072]: I0223 13:01:00.311140 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:01:00.350058 master-0 kubenswrapper[4072]: I0223 13:01:00.350016 4072 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: I0223 13:01:00.354772 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: I0223 13:01:00.354819 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: E0223 13:01:00.354942 4072 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: E0223 13:01:00.354998 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.354978626 +0000 UTC m=+149.165135228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: I0223 13:01:00.355340 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: I0223 13:01:00.355441 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: E0223 13:01:00.355544 4072 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: E0223 13:01:00.355569 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.355561094 +0000 UTC m=+149.165717706 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: E0223 13:01:00.355618 4072 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: E0223 13:01:00.355640 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.355633556 +0000 UTC m=+149.165790168 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: E0223 13:01:00.355674 4072 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:01:00.359215 master-0 kubenswrapper[4072]: E0223 13:01:00.355692 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.355686977 +0000 UTC m=+149.165843589 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found
Feb 23 13:01:00.366304 master-0 kubenswrapper[4072]: W0223 13:01:00.366222 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod048f4455_d99a_407b_8674_60efc7aa6ecb.slice/crio-bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a WatchSource:0}: Error finding container bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a: Status 404 returned error can't find the container with id bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: I0223 13:01:00.456721 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: I0223 13:01:00.456761 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: I0223 13:01:00.456790 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.456886 4072 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.456934 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.456920285 +0000 UTC m=+149.267076897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: I0223 13:01:00.456973 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: I0223 13:01:00.456997 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.457007 4072 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.457041 4072 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: I0223 13:01:00.457016 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.457088 4072 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.457091 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.45707691 +0000 UTC m=+149.267233532 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.457054 4072 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.457188 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.457177363 +0000 UTC m=+149.267333985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.457120 4072 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:01:00.465586 master-0 kubenswrapper[4072]: E0223 13:01:00.457280 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.457271296 +0000 UTC m=+149.267427928 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found
Feb 23 13:01:00.466667 master-0 kubenswrapper[4072]: E0223 13:01:00.457299 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.457291286 +0000 UTC m=+149.267447918 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found
Feb 23 13:01:00.466667 master-0 kubenswrapper[4072]: E0223 13:01:00.457316 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:01.457307867 +0000 UTC m=+149.267464489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:00.754327 master-0 kubenswrapper[4072]: I0223 13:01:00.754079 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-qd2ns" event={"ID":"048f4455-d99a-407b-8674-60efc7aa6ecb","Type":"ContainerStarted","Data":"bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a"}
Feb 23 13:01:00.755729 master-0 kubenswrapper[4072]: I0223 13:01:00.755630 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" event={"ID":"25b5540c-da7d-4b6f-a15f-394451f4674e","Type":"ContainerStarted","Data":"497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636"}
Feb 23 13:01:00.909336 master-0 kubenswrapper[4072]: I0223 13:01:00.907964 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"]
Feb 23 13:01:00.910176 master-0 kubenswrapper[4072]: I0223 13:01:00.910109 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"]
Feb 23 13:01:00.917362 master-0 kubenswrapper[4072]: I0223 13:01:00.917275 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"]
Feb 23 13:01:00.923328 master-0 kubenswrapper[4072]: I0223 13:01:00.918888 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"]
Feb 23 13:01:00.923328 master-0 kubenswrapper[4072]: I0223 13:01:00.923208 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"]
Feb 23 13:01:00.928760 master-0 kubenswrapper[4072]: I0223 13:01:00.927601 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924"]
Feb 23 13:01:00.929417 master-0 kubenswrapper[4072]: I0223 13:01:00.928900 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"]
Feb 23 13:01:00.934001 master-0 kubenswrapper[4072]: W0223 13:01:00.933436 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae1799b6_85b0_4aed_8835_35cb3d8d1109.slice/crio-5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5 WatchSource:0}: Error finding container 5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5: Status 404 returned error can't find the container with id 5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5
Feb 23 13:01:01.194355 master-0 kubenswrapper[4072]: I0223 13:01:01.194230 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"]
Feb 23 13:01:01.200723 master-0 kubenswrapper[4072]: I0223 13:01:01.199312 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"]
Feb 23 13:01:01.202733 master-0 kubenswrapper[4072]: I0223 13:01:01.202361 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"]
Feb 23 13:01:01.208884 master-0 kubenswrapper[4072]: W0223 13:01:01.208781 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03da8bbe_c1b1_4f3f_acec_d1dd0c8afae4.slice/crio-3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c WatchSource:0}: Error finding container 3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c: Status 404 returned error can't find the container with id 3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c
Feb 23 13:01:01.219651 master-0 kubenswrapper[4072]: I0223 13:01:01.219600 4072 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"]
Feb 23 13:01:01.221677 master-0 kubenswrapper[4072]: W0223 13:01:01.221626 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24dab1bc_cf56_429b_93ce_911970c41b5c.slice/crio-6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be WatchSource:0}: Error finding container 6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be: Status 404 returned error can't find the container with id 6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be
Feb 23 13:01:01.224609 master-0 kubenswrapper[4072]: W0223 13:01:01.224571 4072 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99399ebb_c95f_4663_b3b6_f5dfabf47fcf.slice/crio-0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65 WatchSource:0}: Error finding container 0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65: Status 404 returned error can't find the container with id 0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65
Feb 23 13:01:01.239480 master-0 kubenswrapper[4072]: W0223 13:01:01.239401 4072 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a4b185e_17da_4711_a7b2_c2a9e1cd7b30.slice/crio-7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940 WatchSource:0}: Error finding container 7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940: Status 404 returned error can't find the container with id 7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940 Feb 23 13:01:01.368416 master-0 kubenswrapper[4072]: I0223 13:01:01.367981 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:01.368416 master-0 kubenswrapper[4072]: I0223 13:01:01.368092 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:01.368416 master-0 kubenswrapper[4072]: I0223 13:01:01.368150 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:01.368416 master-0 kubenswrapper[4072]: I0223 13:01:01.368323 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:01.368416 master-0 kubenswrapper[4072]: E0223 13:01:01.368346 4072 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:01.368416 master-0 kubenswrapper[4072]: E0223 13:01:01.368438 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.368414536 +0000 UTC m=+151.178571158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found Feb 23 13:01:01.368416 master-0 kubenswrapper[4072]: E0223 13:01:01.368347 4072 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:01.369194 master-0 kubenswrapper[4072]: E0223 13:01:01.368480 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.368472407 +0000 UTC m=+151.178629029 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:01.369194 master-0 kubenswrapper[4072]: E0223 13:01:01.368513 4072 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:01:01.369194 master-0 kubenswrapper[4072]: E0223 13:01:01.368629 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.368598431 +0000 UTC m=+151.178755053 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found Feb 23 13:01:01.372612 master-0 kubenswrapper[4072]: E0223 13:01:01.372540 4072 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:01.372737 master-0 kubenswrapper[4072]: E0223 13:01:01.372700 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.372664663 +0000 UTC m=+151.182821305 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:01.470152 master-0 kubenswrapper[4072]: I0223 13:01:01.469997 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:01.470152 master-0 kubenswrapper[4072]: I0223 13:01:01.470074 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:01.470549 master-0 kubenswrapper[4072]: I0223 13:01:01.470282 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:01:01.470549 master-0 kubenswrapper[4072]: E0223 13:01:01.470295 4072 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:01.470549 master-0 kubenswrapper[4072]: E0223 13:01:01.470405 4072 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.470378955 +0000 UTC m=+151.280535577 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found Feb 23 13:01:01.470549 master-0 kubenswrapper[4072]: E0223 13:01:01.470443 4072 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:01:01.470549 master-0 kubenswrapper[4072]: E0223 13:01:01.470531 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.470508169 +0000 UTC m=+151.280664791 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found Feb 23 13:01:01.470771 master-0 kubenswrapper[4072]: I0223 13:01:01.470661 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:01.470771 master-0 kubenswrapper[4072]: I0223 13:01:01.470719 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:01:01.470771 master-0 kubenswrapper[4072]: I0223 13:01:01.470749 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:01.470886 master-0 kubenswrapper[4072]: E0223 13:01:01.470871 4072 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 23 13:01:01.470925 master-0 kubenswrapper[4072]: E0223 13:01:01.470899 4072 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.470889971 +0000 UTC m=+151.281046593 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found Feb 23 13:01:01.470967 master-0 kubenswrapper[4072]: E0223 13:01:01.470951 4072 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:01.471009 master-0 kubenswrapper[4072]: E0223 13:01:01.470977 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.470968893 +0000 UTC m=+151.281125515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found Feb 23 13:01:01.471089 master-0 kubenswrapper[4072]: E0223 13:01:01.471066 4072 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:01.472060 master-0 kubenswrapper[4072]: E0223 13:01:01.471108 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.471096587 +0000 UTC m=+151.281253209 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:01.472060 master-0 kubenswrapper[4072]: E0223 13:01:01.471125 4072 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:01.472153 master-0 kubenswrapper[4072]: E0223 13:01:01.472101 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:03.472088177 +0000 UTC m=+151.282244799 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found Feb 23 13:01:02.261820 master-0 kubenswrapper[4072]: I0223 13:01:02.261147 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:01:02.261820 master-0 kubenswrapper[4072]: E0223 13:01:02.261320 4072 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 23 13:01:02.261820 master-0 kubenswrapper[4072]: E0223 13:01:02.261418 4072 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:06.261394121 +0000 UTC m=+214.071550733 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : secret "metrics-daemon-secret" not found Feb 23 13:01:02.274124 master-0 kubenswrapper[4072]: I0223 13:01:02.268174 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" event={"ID":"0a80d5ac-27ce-4ba9-809e-28c86b80163b","Type":"ContainerStarted","Data":"4344b3d3f6b6142165c0129c787b17654ed07ce21ae9e2393257e14099cdbbe9"} Feb 23 13:01:02.274124 master-0 kubenswrapper[4072]: I0223 13:01:02.269548 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" event={"ID":"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30","Type":"ContainerStarted","Data":"7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940"} Feb 23 13:01:02.274124 master-0 kubenswrapper[4072]: I0223 13:01:02.272520 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" event={"ID":"b7585f9f-12e5-451b-beeb-db43ae778f25","Type":"ContainerStarted","Data":"ff4d0be1e1784bbea67828ca324e5f5b249ae15e9f46dff8848a9e4b264b1f9a"} Feb 23 13:01:02.305420 master-0 kubenswrapper[4072]: I0223 13:01:02.305358 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerStarted","Data":"33cac62afbdb0955b81a34c275e7dcd7f9a70a4c06dc059893f1ad4906b2e19a"} 
Feb 23 13:01:02.307419 master-0 kubenswrapper[4072]: I0223 13:01:02.307391 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" event={"ID":"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4","Type":"ContainerStarted","Data":"3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c"} Feb 23 13:01:02.309872 master-0 kubenswrapper[4072]: I0223 13:01:02.309828 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerStarted","Data":"11bfb3ba69318ac82e6a17119971c7970b30aa29f2137edc2b60951ffab2514d"} Feb 23 13:01:02.311303 master-0 kubenswrapper[4072]: I0223 13:01:02.311274 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" event={"ID":"ae1799b6-85b0-4aed-8835-35cb3d8d1109","Type":"ContainerStarted","Data":"5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5"} Feb 23 13:01:02.315131 master-0 kubenswrapper[4072]: I0223 13:01:02.315101 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" event={"ID":"b1970ec8-620e-4529-bf3b-1cf9a52c27d3","Type":"ContainerStarted","Data":"cf51deb148d0a54f145674839e6a7092757223a01e6702931c3433cd1423df77"} Feb 23 13:01:02.318658 master-0 kubenswrapper[4072]: I0223 13:01:02.318630 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" event={"ID":"3ab71705-d574-4f95-b3fc-9f7cf5e8a557","Type":"ContainerStarted","Data":"6a6904138e757c983258da9d68a265caa1653a1f12aa6dce24570b08bc55548c"} Feb 23 13:01:02.321115 master-0 kubenswrapper[4072]: I0223 13:01:02.321083 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" event={"ID":"24dab1bc-cf56-429b-93ce-911970c41b5c","Type":"ContainerStarted","Data":"6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be"} Feb 23 13:01:02.323445 master-0 kubenswrapper[4072]: I0223 13:01:02.323410 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" event={"ID":"99399ebb-c95f-4663-b3b6-f5dfabf47fcf","Type":"ContainerStarted","Data":"0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65"} Feb 23 13:01:03.337613 master-0 kubenswrapper[4072]: I0223 13:01:03.337558 4072 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" event={"ID":"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30","Type":"ContainerStarted","Data":"fc76a6ebf82c376de367ae9069a978505805d785a26a3e42e6dad2867b699aeb"} Feb 23 13:01:03.363402 master-0 kubenswrapper[4072]: I0223 13:01:03.363336 4072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" podStartSLOduration=116.363318795 podStartE2EDuration="1m56.363318795s" podCreationTimestamp="2026-02-23 12:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:03.361710797 +0000 UTC m=+151.171867419" watchObservedRunningTime="2026-02-23 13:01:03.363318795 +0000 UTC m=+151.173475397" Feb 23 13:01:03.373895 master-0 kubenswrapper[4072]: I0223 13:01:03.373816 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " 
pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:03.380066 master-0 kubenswrapper[4072]: E0223 13:01:03.380025 4072 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:03.380234 master-0 kubenswrapper[4072]: E0223 13:01:03.380111 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.380092588 +0000 UTC m=+155.190249200 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found Feb 23 13:01:03.380234 master-0 kubenswrapper[4072]: E0223 13:01:03.380207 4072 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:03.380380 master-0 kubenswrapper[4072]: I0223 13:01:03.380228 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:03.380380 master-0 kubenswrapper[4072]: E0223 13:01:03.380300 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert 
podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.380280634 +0000 UTC m=+155.190437246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:03.380570 master-0 kubenswrapper[4072]: I0223 13:01:03.380518 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:03.380625 master-0 kubenswrapper[4072]: E0223 13:01:03.380616 4072 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:03.380625 master-0 kubenswrapper[4072]: I0223 13:01:03.380620 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:03.380706 master-0 kubenswrapper[4072]: E0223 13:01:03.380652 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:01:07.380642085 +0000 UTC m=+155.190798697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:03.380841 master-0 kubenswrapper[4072]: E0223 13:01:03.380803 4072 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:01:03.381107 master-0 kubenswrapper[4072]: E0223 13:01:03.381066 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.380948184 +0000 UTC m=+155.191104866 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found
Feb 23 13:01:04.156649 master-0 kubenswrapper[4072]: I0223 13:01:04.156596 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:04.156857 master-0 kubenswrapper[4072]: I0223 13:01:04.156659 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:01:04.156857 master-0 kubenswrapper[4072]: I0223 13:01:04.156845 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:04.157329 master-0 kubenswrapper[4072]: E0223 13:01:04.156865 4072 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:04.157382 master-0 kubenswrapper[4072]: E0223 13:01:04.157026 4072 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:01:04.159532 master-0 kubenswrapper[4072]: E0223 13:01:04.159488 4072 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 23 13:01:04.159720 master-0 kubenswrapper[4072]: E0223 13:01:04.159698 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.159676986 +0000 UTC m=+155.969833598 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found
Feb 23 13:01:04.159797 master-0 kubenswrapper[4072]: I0223 13:01:04.157109 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:04.159862 master-0 kubenswrapper[4072]: I0223 13:01:04.159842 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:04.159899 master-0 kubenswrapper[4072]: I0223 13:01:04.159887 4072 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:04.160008 master-0 kubenswrapper[4072]: E0223 13:01:04.159990 4072 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:04.160039 master-0 kubenswrapper[4072]: E0223 13:01:04.160022 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.160015416 +0000 UTC m=+155.970172028 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found
Feb 23 13:01:04.160081 master-0 kubenswrapper[4072]: E0223 13:01:04.160066 4072 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:04.160115 master-0 kubenswrapper[4072]: E0223 13:01:04.160088 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.160081348 +0000 UTC m=+155.970237960 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found
Feb 23 13:01:04.163307 master-0 kubenswrapper[4072]: E0223 13:01:04.163271 4072 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:01:04.163424 master-0 kubenswrapper[4072]: E0223 13:01:04.163376 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.163352746 +0000 UTC m=+155.973509358 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found
Feb 23 13:01:04.163685 master-0 kubenswrapper[4072]: E0223 13:01:04.163631 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.163612414 +0000 UTC m=+155.973769076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:04.163737 master-0 kubenswrapper[4072]: E0223 13:01:04.163710 4072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.163702707 +0000 UTC m=+155.973859319 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found
Feb 23 13:01:05.885411 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 23 13:01:05.907895 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 23 13:01:05.908138 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 23 13:01:05.909716 master-0 systemd[1]: kubelet.service: Consumed 11.559s CPU time.
Feb 23 13:01:05.929779 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 23 13:01:06.047613 master-0 kubenswrapper[7845]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:01:06.047613 master-0 kubenswrapper[7845]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 23 13:01:06.047613 master-0 kubenswrapper[7845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:01:06.047613 master-0 kubenswrapper[7845]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:01:06.047613 master-0 kubenswrapper[7845]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 23 13:01:06.047613 master-0 kubenswrapper[7845]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
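The deprecation warnings above all point at the kubelet config file (`--config`, here `/etc/kubernetes/kubelet.conf` per the FLAG dump later in this log). As an illustration only, a minimal KubeletConfiguration sketch carrying the same values this kubelet currently passes as flags; the field names assume a recent v1beta1 schema (`containerRuntimeEndpoint` as a config field requires Kubernetes 1.27+):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint=/var/run/crio/crio.sock
containerRuntimeEndpoint: /var/run/crio/crio.sock
# replaces --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
systemReserved:
  cpu: 500m
  memory: 1Gi
  ephemeral-storage: 1Gi
# replaces --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
```

On an OpenShift node this file is rendered by the Machine Config Operator, so treat the stanzas as a reading aid for the warnings rather than something to hand-edit in place.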
Feb 23 13:01:06.049602 master-0 kubenswrapper[7845]: I0223 13:01:06.047724 7845 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050702 7845 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050721 7845 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050726 7845 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050730 7845 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050735 7845 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050739 7845 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050743 7845 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050747 7845 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050752 7845 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050757 7845 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050762 7845 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050768 7845 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:01:06.050741 master-0 kubenswrapper[7845]: W0223 13:01:06.050773 7845 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050778 7845 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050782 7845 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050786 7845 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050790 7845 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050795 7845 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050800 7845 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050804 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050807 7845 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050811 7845 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050815 7845 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050819 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050824 7845 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050829 7845 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050864 7845 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050868 7845 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050873 7845 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050877 7845 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050882 7845 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:01:06.051625 master-0 kubenswrapper[7845]: W0223 13:01:06.050885 7845 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050889 7845 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050893 7845 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050897 7845 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050902 7845 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050906 7845 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050910 7845 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050915 7845 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050918 7845 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050923 7845 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050926 7845 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050930 7845 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050934 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050938 7845 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050944 7845 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050949 7845 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050953 7845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050957 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050961 7845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050965 7845 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:01:06.052888 master-0 kubenswrapper[7845]: W0223 13:01:06.050969 7845 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.050974 7845 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.050978 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.050983 7845 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.050987 7845 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.050991 7845 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.050995 7845 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.050999 7845 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051004 7845 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051008 7845 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051011 7845 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051016 7845 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051020 7845 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051024 7845 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051027 7845 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051031 7845 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051034 7845 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051038 7845 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051041 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051044 7845 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:01:06.054099 master-0 kubenswrapper[7845]: W0223 13:01:06.051049 7845 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051120 7845 flags.go:64] FLAG: --address="0.0.0.0"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051130 7845 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051140 7845 flags.go:64] FLAG: --anonymous-auth="true"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051147 7845 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051154 7845 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051158 7845 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051164 7845 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051169 7845 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051174 7845 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051179 7845 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051183 7845 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051188 7845 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051192 7845 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051196 7845 flags.go:64] FLAG: --cgroup-root=""
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051200 7845 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051204 7845 flags.go:64] FLAG: --client-ca-file=""
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051208 7845 flags.go:64] FLAG: --cloud-config=""
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051212 7845 flags.go:64] FLAG: --cloud-provider=""
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051216 7845 flags.go:64] FLAG: --cluster-dns="[]"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051221 7845 flags.go:64] FLAG: --cluster-domain=""
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051225 7845 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051229 7845 flags.go:64] FLAG: --config-dir=""
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051233 7845 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051252 7845 flags.go:64] FLAG: --container-log-max-files="5"
Feb 23 13:01:06.055470 master-0 kubenswrapper[7845]: I0223 13:01:06.051258 7845 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051262 7845 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051266 7845 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051298 7845 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051303 7845 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051307 7845 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051311 7845 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051334 7845 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051338 7845 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051344 7845 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051348 7845 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051354 7845 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051358 7845 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051363 7845 flags.go:64] FLAG: --enable-server="true"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051367 7845 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051374 7845 flags.go:64] FLAG: --event-burst="100"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051379 7845 flags.go:64] FLAG: --event-qps="50"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051383 7845 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051388 7845 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051392 7845 flags.go:64] FLAG: --eviction-hard=""
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051397 7845 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051401 7845 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051405 7845 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051410 7845 flags.go:64] FLAG: --eviction-soft=""
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051414 7845 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 13:01:06.057042 master-0 kubenswrapper[7845]: I0223 13:01:06.051418 7845 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051423 7845 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051427 7845 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051431 7845 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051436 7845 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051440 7845 flags.go:64] FLAG: --feature-gates=""
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051445 7845 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051450 7845 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051454 7845 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051459 7845 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051463 7845 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051467 7845 flags.go:64] FLAG: --help="false"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051471 7845 flags.go:64] FLAG: --hostname-override=""
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051475 7845 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051480 7845 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051484 7845 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051488 7845 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051493 7845 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051497 7845 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051501 7845 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051505 7845 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051517 7845 flags.go:64] FLAG: --kube-api-burst="100"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051522 7845 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051527 7845 flags.go:64] FLAG: --kube-api-qps="50"
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051531 7845 flags.go:64] FLAG: --kube-reserved=""
Feb 23 13:01:06.058503 master-0 kubenswrapper[7845]: I0223 13:01:06.051535 7845 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051539 7845 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051543 7845 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051547 7845 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051551 7845 flags.go:64] FLAG: --lock-file=""
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051555 7845 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051559 7845 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051564 7845 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051571 7845 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051576 7845 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051580 7845 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051584 7845 flags.go:64] FLAG: --logging-format="text"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051588 7845 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051592 7845 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051596 7845 flags.go:64] FLAG: --manifest-url=""
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051600 7845 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051605 7845 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051610 7845 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051615 7845 flags.go:64] FLAG: --max-pods="110"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051619 7845 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051623 7845 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051628 7845 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051632 7845 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051637 7845 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 13:01:06.059834 master-0 kubenswrapper[7845]: I0223 13:01:06.051642 7845 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051647 7845 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051656 7845 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051660 7845 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051667 7845 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051672 7845 flags.go:64] FLAG: --pod-cidr=""
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051676 7845 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051682 7845 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051687 7845 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051691 7845 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051696 7845 flags.go:64] FLAG: --port="10250"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051700 7845 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051704 7845 flags.go:64] FLAG: --provider-id=""
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051740 7845 flags.go:64] FLAG: --qos-reserved=""
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051745 7845 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051749 7845 flags.go:64] FLAG: --register-node="true"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051754 7845 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051759 7845 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051765 7845 flags.go:64] FLAG: --registry-burst="10"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051769 7845 flags.go:64] FLAG: --registry-qps="5"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051773 7845 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051778 7845 flags.go:64] FLAG: --reserved-memory=""
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051783 7845 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051787 7845 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 13:01:06.061065 master-0 kubenswrapper[7845]: I0223 13:01:06.051791 7845 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051795 7845 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051799 7845 flags.go:64] FLAG: --runonce="false"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051803 7845 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051807 7845 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051812 7845 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051816 7845 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051820 7845 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051825 7845 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051829 7845 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051833 7845 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051837 7845 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051841 7845 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051849 7845 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051852 7845 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051857 7845 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051861 7845 flags.go:64] FLAG: --system-cgroups=""
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051865 7845 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051871 7845 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051875 7845 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051879 7845 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051885 7845 flags.go:64] FLAG: --tls-min-version=""
Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051889 7845 flags.go:64] FLAG:
--tls-private-key-file="" Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051893 7845 flags.go:64] FLAG: --topology-manager-policy="none" Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051897 7845 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 23 13:01:06.062352 master-0 kubenswrapper[7845]: I0223 13:01:06.051901 7845 flags.go:64] FLAG: --topology-manager-scope="container" Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: I0223 13:01:06.051905 7845 flags.go:64] FLAG: --v="2" Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: I0223 13:01:06.051911 7845 flags.go:64] FLAG: --version="false" Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: I0223 13:01:06.051916 7845 flags.go:64] FLAG: --vmodule="" Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: I0223 13:01:06.051921 7845 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: I0223 13:01:06.051925 7845 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052034 7845 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052039 7845 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052044 7845 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052047 7845 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052052 7845 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052055 7845 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052059 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052063 7845 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052066 7845 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052070 7845 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052073 7845 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052077 7845 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052081 7845 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052084 7845 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052091 7845 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:01:06.063771 master-0 kubenswrapper[7845]: W0223 13:01:06.052095 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052098 7845 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052103 7845 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052108 7845 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052111 7845 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052116 7845 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052121 7845 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052125 7845 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052130 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052134 7845 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052139 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052144 7845 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052149 7845 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052153 7845 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052194 7845 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052199 7845 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052202 7845 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052206 7845 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052210 7845 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052213 7845 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:01:06.065067 master-0 kubenswrapper[7845]: W0223 13:01:06.052217 7845 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052220 7845 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052225 7845 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052230 7845 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052233 7845 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052237 7845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052252 7845 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052256 7845 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052260 7845 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052263 7845 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052266 7845 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052275 7845 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052278 7845 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052282 7845 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052286 7845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052289 7845 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052293 7845 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052296 7845 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052300 7845 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:01:06.066278 master-0 kubenswrapper[7845]: W0223 13:01:06.052303 7845 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052308 7845 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052312 7845 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052316 7845 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052320 7845 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052323 7845 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052327 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052330 7845 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052336 7845 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052339 7845 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052343 7845 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052347 7845 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052350 7845 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052354 7845 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052358 7845 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052361 7845 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052365 7845 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:01:06.067643 master-0 kubenswrapper[7845]: W0223 13:01:06.052368 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: I0223 13:01:06.052383 7845 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: I0223 13:01:06.065310 7845 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: I0223 13:01:06.065345 7845 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065556 7845 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065566 7845 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065573 7845 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065578 7845 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065583 7845 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065588 7845 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065594 7845 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065598 7845 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065603 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065607 7845 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065612 7845 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:01:06.069698 master-0 kubenswrapper[7845]: W0223 13:01:06.065617 7845 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065622 7845 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065630 7845 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065636 7845 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065645 7845 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065651 7845 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065656 7845 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065661 7845 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065667 7845 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065672 7845 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065677 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065682 7845 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065688 7845 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065692 7845 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065709 7845 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065714 7845 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065720 7845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065725 7845 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065730 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065734 7845 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:01:06.070759 master-0 kubenswrapper[7845]: W0223 13:01:06.065741 7845 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065747 7845 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065752 7845 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065757 7845 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065762 7845 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065766 7845 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065774 7845 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065779 7845 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065783 7845 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065788 7845 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065792 7845 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065797 7845 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065802 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065806 7845 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065811 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065816 7845 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065820 7845 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065825 7845 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065863 7845 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065871 7845 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:01:06.072189 master-0 kubenswrapper[7845]: W0223 13:01:06.065876 7845 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065880 7845 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065885 7845 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065890 7845 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065896 7845 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065901 7845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065907 7845 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065913 7845 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065918 7845 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065923 7845 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065937 7845 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065945 7845 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065951 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065956 7845 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065961 7845 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065966 7845 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065972 7845 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065977 7845 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065982 7845 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:01:06.073777 master-0 kubenswrapper[7845]: W0223 13:01:06.065988 7845 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.065994 7845 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: I0223 13:01:06.066002 7845 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066687 7845 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066700 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066710 7845 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066715 7845 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066719 7845 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066724 7845 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066728 7845 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066733 7845 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066738 7845 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066742 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066747 7845 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066751 7845 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:01:06.074867 master-0 kubenswrapper[7845]: W0223 13:01:06.066756 7845 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066761 7845 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066766 7845 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066774 7845 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066779 7845 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066784 7845 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066791 7845 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066797 7845 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066802 7845 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066806 7845 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066811 7845 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066816 7845 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066830 7845 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066835 7845 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066840 7845 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066848 7845 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066852 7845 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066857 7845 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066862 7845 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066866 7845 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:01:06.075704 master-0 kubenswrapper[7845]: W0223 13:01:06.066871 7845 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066876 7845 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066881 7845 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066887 7845 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066894 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066898 7845 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066905 7845 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066914 7845 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066920 7845 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066925 7845 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066931 7845 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066937 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066943 7845 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066948 7845 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066954 7845 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066959 7845 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066964 7845 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066970 7845 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066976 7845 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 13:01:06.076909 master-0 kubenswrapper[7845]: W0223 13:01:06.066993 7845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.066998 7845 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067003 7845 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067008 7845 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067013 7845 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: 
W0223 13:01:06.067017 7845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067022 7845 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067068 7845 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067073 7845 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067085 7845 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067090 7845 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067097 7845 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067103 7845 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067111 7845 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067117 7845 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067122 7845 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067127 7845 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067131 7845 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067136 7845 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067141 7845 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:01:06.078068 master-0 kubenswrapper[7845]: W0223 13:01:06.067145 7845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.067154 7845 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.067610 7845 server.go:940] "Client rotation is on, will bootstrap in background" Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.071998 7845 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.072120 7845 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.072455 7845 server.go:997] "Starting client certificate rotation" Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.072469 7845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.072673 7845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 12:50:52 +0000 UTC, rotation deadline is 2026-02-24 09:23:55.112407557 +0000 UTC Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.072756 7845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h22m49.039654092s for next certificate rotation Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.073478 7845 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.075159 7845 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:01:06.079185 master-0 kubenswrapper[7845]: I0223 13:01:06.079165 7845 log.go:25] "Validated CRI v1 runtime API" Feb 23 13:01:06.081649 master-0 kubenswrapper[7845]: I0223 13:01:06.081610 7845 log.go:25] "Validated CRI v1 image API" Feb 23 13:01:06.082919 
master-0 kubenswrapper[7845]: I0223 13:01:06.082876 7845 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 23 13:01:06.086969 master-0 kubenswrapper[7845]: I0223 13:01:06.086767 7845 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 a0645d8c-797c-4e96-9069-34c436b1201e:/dev/vda3] Feb 23 13:01:06.087817 master-0 kubenswrapper[7845]: I0223 13:01:06.086967 7845 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/11bfb3ba69318ac82e6a17119971c7970b30aa29f2137edc2b60951ffab2514d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/11bfb3ba69318ac82e6a17119971c7970b30aa29f2137edc2b60951ffab2514d/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/33cac62afbdb0955b81a34c275e7dcd7f9a70a4c06dc059893f1ad4906b2e19a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33cac62afbdb0955b81a34c275e7dcd7f9a70a4c06dc059893f1ad4906b2e19a/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4344b3d3f6b6142165c0129c787b17654ed07ce21ae9e2393257e14099cdbbe9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4344b3d3f6b6142165c0129c787b17654ed07ce21ae9e2393257e14099cdbbe9/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be/userdata/shm major:0 minor:297 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6a6904138e757c983258da9d68a265caa1653a1f12aa6dce24570b08bc55548c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a6904138e757c983258da9d68a265caa1653a1f12aa6dce24570b08bc55548c/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e/userdata/shm major:0 minor:152 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/929cd0d2afd60c7d9f544041dba457a14033d12033f2175e4ed353ff5c86ad87/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/929cd0d2afd60c7d9f544041dba457a14033d12033f2175e4ed353ff5c86ad87/userdata/shm major:0 minor:131 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/986ae970a2c0750329313ea9f039e9fe0804cca7630dc137dcff229019ea869e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/986ae970a2c0750329313ea9f039e9fe0804cca7630dc137dcff229019ea869e/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a/userdata/shm major:0 minor:301 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf51deb148d0a54f145674839e6a7092757223a01e6702931c3433cd1423df77/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf51deb148d0a54f145674839e6a7092757223a01e6702931c3433cd1423df77/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997/userdata/shm major:0 minor:149 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072/userdata/shm major:0 minor:117 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ff4d0be1e1784bbea67828ca324e5f5b249ae15e9f46dff8848a9e4b264b1f9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff4d0be1e1784bbea67828ca324e5f5b249ae15e9f46dff8848a9e4b264b1f9a/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~projected/kube-api-access-kdnn5:{mountpoint:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~projected/kube-api-access-kdnn5 major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/etcd-client major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/serving-cert major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/048f4455-d99a-407b-8674-60efc7aa6ecb/volumes/kubernetes.io~projected/kube-api-access-plz5n:{mountpoint:/var/lib/kubelet/pods/048f4455-d99a-407b-8674-60efc7aa6ecb/volumes/kubernetes.io~projected/kube-api-access-plz5n major:0 minor:282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~projected/kube-api-access-xjthf:{mountpoint:/var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~projected/kube-api-access-xjthf major:0 minor:272 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~projected/kube-api-access-slw4h:{mountpoint:/var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~projected/kube-api-access-slw4h major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~projected/kube-api-access-q7h97:{mountpoint:/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~projected/kube-api-access-q7h97 major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~projected/kube-api-access-2csk2:{mountpoint:/var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~projected/kube-api-access-2csk2 major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~projected/kube-api-access-rrhrx:{mountpoint:/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~projected/kube-api-access-rrhrx major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~projected/kube-api-access-crt2t:{mountpoint:/var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~projected/kube-api-access-crt2t major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~secret/webhook-cert major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~projected/kube-api-access-jccjf:{mountpoint:/var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~projected/kube-api-access-jccjf major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~projected/kube-api-access major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/65ddfc68-2612-42b6-ad11-6fe44f1cff60/volumes/kubernetes.io~projected/kube-api-access-8jg7c:{mountpoint:/var/lib/kubelet/pods/65ddfc68-2612-42b6-ad11-6fe44f1cff60/volumes/kubernetes.io~projected/kube-api-access-8jg7c major:0 minor:130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~projected/kube-api-access-fppk7:{mountpoint:/var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~projected/kube-api-access-fppk7 major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~secret/metrics-tls major:0 minor:77 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/kube-api-access-tz9fr:{mountpoint:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/kube-api-access-tz9fr major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~projected/kube-api-access-p4h6l:{mountpoint:/var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~projected/kube-api-access-p4h6l major:0 minor:281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~secret/serving-cert major:0 minor:245 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~projected/kube-api-access-nmv5f:{mountpoint:/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~projected/kube-api-access-nmv5f major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~projected/kube-api-access-lmw9r:{mountpoint:/var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~projected/kube-api-access-lmw9r major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~secret/serving-cert major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b053c311-07fd-45bb-ab10-6e7b76c9aa48/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b053c311-07fd-45bb-ab10-6e7b76c9aa48/volumes/kubernetes.io~projected/kube-api-access major:0 minor:112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~projected/kube-api-access major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~secret/serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~projected/kube-api-access-fsp9d:{mountpoint:/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~projected/kube-api-access-fsp9d major:0 minor:147 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b7585f9f-12e5-451b-beeb-db43ae778f25/volumes/kubernetes.io~projected/kube-api-access-qfrht:{mountpoint:/var/lib/kubelet/pods/b7585f9f-12e5-451b-beeb-db43ae778f25/volumes/kubernetes.io~projected/kube-api-access-qfrht major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0b59f2a-7014-448c-9d3b-e38281f07dbc/volumes/kubernetes.io~projected/kube-api-access-nt9nl:{mountpoint:/var/lib/kubelet/pods/c0b59f2a-7014-448c-9d3b-e38281f07dbc/volumes/kubernetes.io~projected/kube-api-access-nt9nl major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~projected/kube-api-access-l4j2q:{mountpoint:/var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~projected/kube-api-access-l4j2q major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~projected/kube-api-access-nhgkv:{mountpoint:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~projected/kube-api-access-nhgkv major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~projected/kube-api-access-tvr7p:{mountpoint:/var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~projected/kube-api-access-tvr7p major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/kube-api-access-r8l8f:{mountpoint:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/kube-api-access-r8l8f major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~projected/kube-api-access-lwphb:{mountpoint:/var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~projected/kube-api-access-lwphb major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~projected/kube-api-access-ngvd2:{mountpoint:/var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~projected/kube-api-access-ngvd2 major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~projected/kube-api-access-gr6rg:{mountpoint:/var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~projected/kube-api-access-gr6rg major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~secret/serving-cert major:0 minor:252 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~projected/kube-api-access-7v7b9:{mountpoint:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~projected/kube-api-access-7v7b9 major:0 minor:148 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:143 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/eaec3fa7b549d085042c44ed7575928bdb25d1c07a7d73cceb9d49b07bfb0ed2/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-115:{mountpoint:/var/lib/containers/storage/overlay/0f111b6188848e42258d030d5c821b753c1543f987fa429c04aa49fc9e45a6a1/merged major:0 minor:115 fsType:overlay blockSize:0} overlay_0-119:{mountpoint:/var/lib/containers/storage/overlay/eda31bd64232f2fc513027432fe3f1a02c61461496cd24560719892bd08b0ea8/merged major:0 minor:119 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/3504739513dcef5ea5f997f7f54490b36abdb180c144d1b2eb9f1a5ae49127bd/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/39b5a986db28fbc83ade6171bebca27200498397b78e7b995db5d9fb68ca124e/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/6cb48805035e2d3e7d5113af6595bc4aef90f654c5e6e8cc36bbabff782da139/merged major:0 minor:125 fsType:overlay blockSize:0} 
overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/2b332784aade244fe5c1f6d676924b1fbb243e1942b7407169a866b9856106d0/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/5cebfd03d4baf76b0d45f3b0cec14b6512d7e092b591ad99d8f80688d9cee1b2/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/7c93767e921d5ddd69cffe13aa5a765b69129d0d97b6c43d4c75feccb6623271/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/23e3f8a797cf2694217d59bf7f1c99e0c180911040e89633e3af32abf4b315c9/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-151:{mountpoint:/var/lib/containers/storage/overlay/98d9d67f97c4a23061b9e78102cad14235b6e99706d583d5c32fa72b5afb497a/merged major:0 minor:151 fsType:overlay blockSize:0} overlay_0-155:{mountpoint:/var/lib/containers/storage/overlay/df0fba76b984c37e21199b12db39a7a49a598237a62b2a3564663df70a129289/merged major:0 minor:155 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/36624997f9e68be643f40a9407e2f49305b3e1d23b019ff1721e9ff87c4f4ebf/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/af016d2c244495ff1a47195db819f82c459adc9449f3eef8624495189743b219/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/3e865b4082a97cff1107f297a6bf94d930326aae892b8f7e5a4fd8bee5f59e24/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/40024e92907f505ee77eefe0a75f53b88f3cc191e0b1f793db2301afd4ebc63f/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/53fe9b890199404f1365a28adaf7f737d7d253ee31601701e80ee9e49dab06b6/merged major:0 minor:172 fsType:overlay blockSize:0} 
overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/08bdf389964dbf16b8de2fd3de1cef20ff004f247c505f60dc456349aaebb057/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/d335a4a6a86ff74d32deced58a2e37af5ee84d0e6965d6b97af01cbb03818085/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/e558aab6f31335fb08087dbc0b803eebce78407bd85db135b6df5ddb0b6b724d/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-182:{mountpoint:/var/lib/containers/storage/overlay/38b17c5834591f05605d6e4479efe5dc5f1a61c7f96325202978dfb9b3a87ef1/merged major:0 minor:182 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/dbe629524759c44e92ea3a0eb393fdea943195b42a9b527f77795ad6eac093da/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/d020e6db7545c9e09d8537e75210aca175fea8f8dd2d9870dab1f9eb37eeec24/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/5ae9c13ece4b281cd44c72e89f7c0a476a3ff29cbca16bdac1f50b579588c4ef/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-197:{mountpoint:/var/lib/containers/storage/overlay/daa126a250dab4486e2aca4aa536e1d97d8675671a4c916938943ad319a585dd/merged major:0 minor:197 fsType:overlay blockSize:0} overlay_0-202:{mountpoint:/var/lib/containers/storage/overlay/4562d0526c5194c2286b0dd7f39a9ed2b9ac16f0be5c61f4f73742416b18a0b5/merged major:0 minor:202 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/c02391bf4c3de60fc984b1c3ba0fa01ea026da623efcea80344fd9b05e935a82/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/60296e84b93496a350277233723b3a33fdb4668bb64e0ab00836e3e289b0f3f9/merged major:0 minor:215 fsType:overlay blockSize:0} 
overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/5bb5a39562496f577579acbfe906d8dbe922d603889b1adb607f6ae750df1b54/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-225:{mountpoint:/var/lib/containers/storage/overlay/2af52d69e3324a8e9de0b6ade0b90d6865371a365a94388ea9debaed024dddd2/merged major:0 minor:225 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/2241ec4f756f66804fb2802d5a0aa5149a76b7c6aff618417e9de3739c46052f/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/928e04d2aa24ca9b02266368a93c11cfc9b882abbc7b1d2cda2e87de7b9f47ed/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/d25943883b226108ade83c9adffe24518113b5e008e1c6bba4320299016ebeab/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/7eefaa3d195cc623f2f0da340718a94e0b7a0244bf2d48bb9bd5b09970bae89c/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/fb6457e5332b8048ae16a26f33bdd956ff6feee3358308fa69f310a0ae488557/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/d732509ee1ec684a1f911fef39b66850b713f0059c4bd72f73b2798140cf9d3e/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/eda685bc9e178f5a33850260ceeff5476ef0230a80d3ba113c8f90ae338ed01f/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/4e533a3703ca905378dffac2adf814dc16e36b7433dbf7032caca9e72166894e/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/a2d839845640c3b8162b00503bc3c0047e098035b45a53b6051a2f4dbd03a3c7/merged major:0 minor:315 fsType:overlay blockSize:0} 
overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/85da1dce87e2079aa686b83039237224c0e2a541b53207221be6938e66e9b2f3/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/71687e22427353e73fdefc3a7dbd50dced583f0d5a0443e452608f974f329272/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/a75671efa102ab095222608c92bd1dd2ff24d782e094b398c1f05a983af27ba9/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/6f50e1269c12b5236b99d71cf9130c6c63b509098778de52dda895284dd6954b/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/af8ca27a6bafc52fd603e7dcb8c98564bfb634c9bee4490547790005f39564b2/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/cbbeb6d12d2c961c9ff6e417f55653ed299531d248b086d1ddcf6d6572257cb5/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/1388ae1a33268aa6fb7393bb23bd23edddeee024511b61a81398efbf73c96e47/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/1c1500c0df7a6b888322fffa40573969cf69218b9ffcf04062c916fc0bab214a/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/294b0ace8561fb65386632c12f9042cb7ed91b1f12caa3c7567de096778d2889/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/f308041162736faa361aae5162cbcf177317e5135fd2a15cceeb925d5af940ee/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/a75a0c6f731011562b25d12b3db1240bbbb3be3e5d7034e90972e31704c76006/merged major:0 minor:56 fsType:overlay blockSize:0} 
overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/7c2ac38807a2e0bf51268d21df5cc0bea8df41ea7f5bdf03258e6ad59aa9278d/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/d967d745c74c3a649150e3b2c6f3dec0a9fb2ae11b48f58947f5daabc8076ac1/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/243721d529ca267f9fd6c13b763d493e3da0fc3127765c73aecba25f446e221f/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/869725d3c5085f404d0c23e2d8f276f0f3bf215ad8a82063232c0844e7b41d94/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/2a929032486d070d9edc4b93beed2c8f2374878d0525a2002b72c41de119ccd7/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/dd987487bc9f004af1c600e03374d40ca011c8053e11293030379601929d1b50/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/50ba7ded24c6fcac53f7e8dae38c873009a3335290470ea7c48b6e109d6dfb25/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/cd72f4b1d9f38bc256714642150d30b41080764c940e83e5501ad9c014ccbcc0/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/a9459e7c71f43a0f57c4c12cff2e13e54dc4c135700ad12d9633068ca55dde18/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/8435135332b40ce31f5853a6b448fcc830ae57db8f992d7957edde4e8c31fbb0/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/927939dafebbbc61969897d51a169f5f394bb4fa732dd94688a4452373bb7419/merged major:0 minor:94 fsType:overlay blockSize:0}] Feb 23 13:01:06.128079 master-0 
kubenswrapper[7845]: I0223 13:01:06.126590 7845 manager.go:217] Machine: {Timestamp:2026-02-23 13:01:06.124369724 +0000 UTC m=+0.120100675 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1f5e0293a13e4ebabb9c281fe953e842 SystemUUID:1f5e0293-a13e-4eba-bb9c-281fe953e842 BootID:08350faf-787c-4da6-a444-e23ed90f1388 Filesystems:[{Device:/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:142 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~projected/kube-api-access-tvr7p DeviceMajor:0 DeviceMinor:239 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/986ae970a2c0750329313ea9f039e9fe0804cca7630dc137dcff229019ea869e/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~projected/kube-api-access 
DeviceMajor:0 DeviceMinor:266 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-151 DeviceMajor:0 DeviceMinor:151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e/userdata/shm DeviceMajor:0 DeviceMinor:152 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a6904138e757c983258da9d68a265caa1653a1f12aa6dce24570b08bc55548c/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4344b3d3f6b6142165c0129c787b17654ed07ce21ae9e2393257e14099cdbbe9/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b053c311-07fd-45bb-ab10-6e7b76c9aa48/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:112 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 
DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:258 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997/userdata/shm DeviceMajor:0 DeviceMinor:149 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:245 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:254 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~projected/kube-api-access-rrhrx DeviceMajor:0 DeviceMinor:260 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:256 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~projected/kube-api-access-fppk7 DeviceMajor:0 DeviceMinor:111 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~projected/kube-api-access-fsp9d DeviceMajor:0 DeviceMinor:147 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~projected/kube-api-access-q7h97 DeviceMajor:0 DeviceMinor:278 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~projected/kube-api-access-nmv5f DeviceMajor:0 DeviceMinor:241 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-327 
DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-115 DeviceMajor:0 DeviceMinor:115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~projected/kube-api-access-crt2t DeviceMajor:0 DeviceMinor:167 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:253 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~projected/kube-api-access-p4h6l DeviceMajor:0 DeviceMinor:281 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-119 DeviceMajor:0 DeviceMinor:119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~projected/kube-api-access-2csk2 DeviceMajor:0 DeviceMinor:240 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be/userdata/shm DeviceMajor:0 DeviceMinor:297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c0b59f2a-7014-448c-9d3b-e38281f07dbc/volumes/kubernetes.io~projected/kube-api-access-nt9nl DeviceMajor:0 DeviceMinor:110 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-155 DeviceMajor:0 DeviceMinor:155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:264 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:252 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~projected/kube-api-access-nhgkv DeviceMajor:0 DeviceMinor:277 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:77 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~projected/kube-api-access-kdnn5 DeviceMajor:0 DeviceMinor:267 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-225 DeviceMajor:0 DeviceMinor:225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33cac62afbdb0955b81a34c275e7dcd7f9a70a4c06dc059893f1ad4906b2e19a/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~projected/kube-api-access-jccjf DeviceMajor:0 DeviceMinor:265 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a/userdata/shm DeviceMajor:0 DeviceMinor:301 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:259 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~projected/kube-api-access-gr6rg DeviceMajor:0 DeviceMinor:261 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~projected/kube-api-access-l4j2q DeviceMajor:0 DeviceMinor:262 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~projected/kube-api-access-xjthf DeviceMajor:0 DeviceMinor:272 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/11bfb3ba69318ac82e6a17119971c7970b30aa29f2137edc2b60951ffab2514d/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/kube-api-access-tz9fr DeviceMajor:0 DeviceMinor:257 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:166 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~projected/kube-api-access-ngvd2 DeviceMajor:0 DeviceMinor:280 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:143 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~projected/kube-api-access-7v7b9 DeviceMajor:0 DeviceMinor:148 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ff4d0be1e1784bbea67828ca324e5f5b249ae15e9f46dff8848a9e4b264b1f9a/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~projected/kube-api-access-lwphb DeviceMajor:0 DeviceMinor:135 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:249 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b7585f9f-12e5-451b-beeb-db43ae778f25/volumes/kubernetes.io~projected/kube-api-access-qfrht DeviceMajor:0 DeviceMinor:279 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~projected/kube-api-access-slw4h DeviceMajor:0 DeviceMinor:242 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-202 DeviceMajor:0 DeviceMinor:202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-182 DeviceMajor:0 DeviceMinor:182 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~projected/kube-api-access-lmw9r DeviceMajor:0 DeviceMinor:255 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/65ddfc68-2612-42b6-ad11-6fe44f1cff60/volumes/kubernetes.io~projected/kube-api-access-8jg7c DeviceMajor:0 DeviceMinor:130 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-197 DeviceMajor:0 DeviceMinor:197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/929cd0d2afd60c7d9f544041dba457a14033d12033f2175e4ed353ff5c86ad87/userdata/shm DeviceMajor:0 DeviceMinor:131 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/kube-api-access-r8l8f DeviceMajor:0 DeviceMinor:263 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/048f4455-d99a-407b-8674-60efc7aa6ecb/volumes/kubernetes.io~projected/kube-api-access-plz5n DeviceMajor:0 DeviceMinor:282 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf51deb148d0a54f145674839e6a7092757223a01e6702931c3433cd1423df77/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072/userdata/shm DeviceMajor:0 DeviceMinor:117 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:49335549952 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0fecd2bc8223ea5 MacAddress:72:7d:a4:23:4b:a9 Speed:10000 Mtu:8900} {Name:11bfb3ba69318ac MacAddress:26:1c:02:ac:2d:bf Speed:10000 Mtu:8900} {Name:3379914a7286621 MacAddress:9e:96:4a:23:71:9a Speed:10000 Mtu:8900} {Name:33cac62afbdb095 MacAddress:be:07:d1:f7:07:6e Speed:10000 Mtu:8900} {Name:4344b3d3f6b6142 MacAddress:5a:af:a8:09:99:9d Speed:10000 Mtu:8900} {Name:497bca4205af77a MacAddress:ae:03:47:bf:d1:73 Speed:10000 Mtu:8900} {Name:5ca54e90d031d4b MacAddress:0e:41:4c:2f:65:c3 Speed:10000 Mtu:8900} {Name:6052e687d5a0ce7 MacAddress:b6:2f:08:f9:3d:9c Speed:10000 Mtu:8900} {Name:6a6904138e757c9 MacAddress:3e:51:ee:a7:97:d5 Speed:10000 Mtu:8900} {Name:7989d68762e9c6f MacAddress:66:d1:60:b1:11:f3 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:66:cc:a9:b3:d5:47 Speed:0 Mtu:8900} {Name:cf51deb148d0a54 MacAddress:32:1f:92:3c:80:a2 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fe:58:4c Speed:-1 Mtu:9000} {Name:ff4d0be1e1784bb MacAddress:5a:ca:81:0e:72:e4 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:42:b9:27:f4:5e:8e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 
Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 23 13:01:06.128079 master-0 kubenswrapper[7845]: I0223 13:01:06.128008 7845 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 23 13:01:06.128981 master-0 kubenswrapper[7845]: I0223 13:01:06.128308 7845 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 23 13:01:06.128981 master-0 kubenswrapper[7845]: I0223 13:01:06.128729 7845 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 23 13:01:06.129321 master-0 kubenswrapper[7845]: I0223 13:01:06.129221 7845 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 23 13:01:06.129681 master-0 kubenswrapper[7845]: I0223 13:01:06.129322 7845 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 23 13:01:06.129820 master-0 kubenswrapper[7845]: I0223 13:01:06.129703 7845 topology_manager.go:138] "Creating topology manager with none policy"
Feb 23 13:01:06.129820 master-0 kubenswrapper[7845]: I0223 13:01:06.129723 7845 container_manager_linux.go:303] "Creating device plugin manager"
Feb 23 13:01:06.129820 master-0 kubenswrapper[7845]: I0223 13:01:06.129740 7845 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 13:01:06.129820 master-0 kubenswrapper[7845]: I0223 13:01:06.129780 7845 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 13:01:06.130162 master-0 kubenswrapper[7845]: I0223 13:01:06.130135 7845 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 13:01:06.130412 master-0 kubenswrapper[7845]: I0223 13:01:06.130361 7845 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 23 13:01:06.130505 master-0 kubenswrapper[7845]: I0223 13:01:06.130468 7845 kubelet.go:418] "Attempting to sync node with API server"
Feb 23 13:01:06.130505 master-0 kubenswrapper[7845]: I0223 13:01:06.130490 7845 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 23 13:01:06.130686 master-0 kubenswrapper[7845]: I0223 13:01:06.130515 7845 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 23 13:01:06.130686 master-0 kubenswrapper[7845]: I0223 13:01:06.130553 7845 kubelet.go:324] "Adding apiserver pod source"
Feb 23 13:01:06.130686 master-0 kubenswrapper[7845]: I0223 13:01:06.130594 7845 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 23 13:01:06.132390 master-0 kubenswrapper[7845]: I0223 13:01:06.132332 7845 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1"
Feb 23 13:01:06.132782 master-0 kubenswrapper[7845]: I0223 13:01:06.132724 7845 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 23 13:01:06.133707 master-0 kubenswrapper[7845]: I0223 13:01:06.133658 7845 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 23 13:01:06.133944 master-0 kubenswrapper[7845]: I0223 13:01:06.133897 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 23 13:01:06.133944 master-0 kubenswrapper[7845]: I0223 13:01:06.133935 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.133950 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.133964 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.133976 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.133990 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.134004 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.134019 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.134035 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.134049 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.134092 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 23 13:01:06.134120 master-0 kubenswrapper[7845]: I0223 13:01:06.134116 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 23 13:01:06.135038 master-0 kubenswrapper[7845]: I0223 13:01:06.134157 7845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 23 13:01:06.136292 master-0 kubenswrapper[7845]: I0223 13:01:06.135584 7845 server.go:1280] "Started kubelet"
Feb 23 13:01:06.136292 master-0 kubenswrapper[7845]: I0223 13:01:06.135716 7845 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 23 13:01:06.136292 master-0 kubenswrapper[7845]: I0223 13:01:06.135778 7845 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 23 13:01:06.136292 master-0 kubenswrapper[7845]: I0223 13:01:06.136003 7845 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 23 13:01:06.139778 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 23 13:01:06.149104 master-0 kubenswrapper[7845]: I0223 13:01:06.141073 7845 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.152837 7845 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.153499 7845 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.153663 7845 server.go:449] "Adding debug handlers to kubelet server"
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.153955 7845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.154004 7845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.154030 7845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 12:50:52 +0000 UTC, rotation deadline is 2026-02-24 09:18:20.452772997 +0000 UTC
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.154087 7845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h17m14.298688702s for next certificate rotation
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.154127 7845 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.154140 7845 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.154220 7845 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.155745 7845 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.156018 7845 factory.go:55] Registering systemd factory
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.156045 7845 factory.go:221] Registration of the systemd container factory successfully
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.157545 7845 factory.go:153] Registering CRI-O factory
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.157589 7845 factory.go:221] Registration of the crio container factory successfully
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.157713 7845 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.157740 7845 factory.go:103] Registering Raw factory
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.157772 7845 manager.go:1196] Started watching for new ooms in manager
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.158438 7845 manager.go:319] Starting recovery of all containers
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159114 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d82f223-e28b-4917-8513-3ca5c6e9bff7" volumeName="kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159160 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d82f223-e28b-4917-8513-3ca5c6e9bff7" volumeName="kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159174 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44b07d33-6e84-434e-9a14-431846620968" volumeName="kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159186 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159198 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a406f63-eeeb-4da3-a1d0-86b5ab5d802c" volumeName="kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159210 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" volumeName="kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159219 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b59f2a-7014-448c-9d3b-e38281f07dbc" volumeName="kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159230 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08577c3c-73d8-47f4-ba30-aec11af51d40" volumeName="kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159265 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dab1bc-cf56-429b-93ce-911970c41b5c" volumeName="kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159284 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25b5540c-da7d-4b6f-a15f-394451f4674e" volumeName="kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159296 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25b5540c-da7d-4b6f-a15f-394451f4674e" volumeName="kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159311 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dab1bc-cf56-429b-93ce-911970c41b5c" volumeName="kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159322 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1799b6-85b0-4aed-8835-35cb3d8d1109" volumeName="kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159338 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159349 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="048f4455-d99a-407b-8674-60efc7aa6ecb" volumeName="kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159361 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d953c37-1b74-4ce5-89cb-b3f53454fc57" volumeName="kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159372 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3dfb271-a659-45e0-b51d-5e99ec43b555" volumeName="kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159387 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da5d5997-e45f-4858-a9a9-e880bc222caf" volumeName="kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159397 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd03d6e-4c8c-400a-8001-343aaeeca93b" volumeName="kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159484 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159502 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a80d5ac-27ce-4ba9-809e-28c86b80163b" volumeName="kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159515 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d953c37-1b74-4ce5-89cb-b3f53454fc57" volumeName="kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159526 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" volumeName="kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159536 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd03d6e-4c8c-400a-8001-343aaeeca93b" volumeName="kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159549 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" volumeName="kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159561 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" volumeName="kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159574 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159586 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159599 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd03d6e-4c8c-400a-8001-343aaeeca93b" volumeName="kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159610 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a406f63-eeeb-4da3-a1d0-86b5ab5d802c" volumeName="kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159623 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" volumeName="kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159662 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3dfb271-a659-45e0-b51d-5e99ec43b555" volumeName="kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159675 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cbcca259-0dbf-48ca-bf90-eec638dcdd10" volumeName="kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159688 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c51b25-f013-4f5c-acbd-598350468192" volumeName="kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159700 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b59f2a-7014-448c-9d3b-e38281f07dbc" volumeName="kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159711 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee436961-c305-4c84-b4f9-175e1d8004fb" volumeName="kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159722 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159733 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dab1bc-cf56-429b-93ce-911970c41b5c" volumeName="kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159744 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" volumeName="kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159755 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" volumeName="kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159765 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159781 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" volumeName="kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159793 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" volumeName="kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159805 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2b80534-3c9d-4ddb-9215-d50d63294c7c" volumeName="kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159816 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159826 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85958edf-e3da-4704-8f09-cf049101f2e6" volumeName="kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159837 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1799b6-85b0-4aed-8835-35cb3d8d1109" volumeName="kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159851 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b053c311-07fd-45bb-ab10-6e7b76c9aa48" volumeName="kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159864 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7fbab55-8405-44f4-ae2a-412c115ce411" volumeName="kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159877 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159889 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" volumeName="kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159900 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" volumeName="kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159918 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1799b6-85b0-4aed-8835-35cb3d8d1109" volumeName="kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159930 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c51b25-f013-4f5c-acbd-598350468192" volumeName="kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159941 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b7585f9f-12e5-451b-beeb-db43ae778f25" volumeName="kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159957 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2b80534-3c9d-4ddb-9215-d50d63294c7c" volumeName="kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159969 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2b80534-3c9d-4ddb-9215-d50d63294c7c" volumeName="kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159985 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.159999 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160013 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a80d5ac-27ce-4ba9-809e-28c86b80163b" volumeName="kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160025 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a80d5ac-27ce-4ba9-809e-28c86b80163b" volumeName="kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160036 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ddfc68-2612-42b6-ad11-6fe44f1cff60" volumeName="kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160050 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c51b25-f013-4f5c-acbd-598350468192" volumeName="kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160061 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee436961-c305-4c84-b4f9-175e1d8004fb" volumeName="kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160075 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160091 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160103 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ddfc68-2612-42b6-ad11-6fe44f1cff60" volumeName="kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap" seLinuxMountContext=""
Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160116 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="85958edf-e3da-4704-8f09-cf049101f2e6" volumeName="kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160134 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b053c311-07fd-45bb-ab10-6e7b76c9aa48" volumeName="kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160147 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d82f223-e28b-4917-8513-3ca5c6e9bff7" volumeName="kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160163 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ddfc68-2612-42b6-ad11-6fe44f1cff60" volumeName="kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160177 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b59f2a-7014-448c-9d3b-e38281f07dbc" volumeName="kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160226 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160265 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3d82f223-e28b-4917-8513-3ca5c6e9bff7" volumeName="kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160278 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ddfc68-2612-42b6-ad11-6fe44f1cff60" volumeName="kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160291 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a406f63-eeeb-4da3-a1d0-86b5ab5d802c" volumeName="kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160310 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160323 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="048f4455-d99a-407b-8674-60efc7aa6ecb" volumeName="kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160336 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25b5540c-da7d-4b6f-a15f-394451f4674e" volumeName="kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160354 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" volumeName="kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160372 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c51b25-f013-4f5c-acbd-598350468192" volumeName="kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160390 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cbcca259-0dbf-48ca-bf90-eec638dcdd10" volumeName="kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160401 7845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides" seLinuxMountContext="" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160416 7845 reconstruct.go:97] "Volume reconstruction finished" Feb 23 13:01:06.161683 master-0 kubenswrapper[7845]: I0223 13:01:06.160425 7845 reconciler.go:26] "Reconciler: start to sync state" Feb 23 13:01:06.168942 master-0 kubenswrapper[7845]: I0223 13:01:06.162772 7845 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 23 13:01:06.200112 master-0 kubenswrapper[7845]: I0223 13:01:06.199972 7845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 23 13:01:06.202504 master-0 kubenswrapper[7845]: I0223 13:01:06.202476 7845 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 23 13:01:06.202708 master-0 kubenswrapper[7845]: I0223 13:01:06.202674 7845 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 23 13:01:06.202869 master-0 kubenswrapper[7845]: I0223 13:01:06.202849 7845 kubelet.go:2335] "Starting kubelet main sync loop" Feb 23 13:01:06.203067 master-0 kubenswrapper[7845]: E0223 13:01:06.203034 7845 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 23 13:01:06.204683 master-0 kubenswrapper[7845]: I0223 13:01:06.204651 7845 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 13:01:06.208663 master-0 kubenswrapper[7845]: I0223 13:01:06.208594 7845 generic.go:334] "Generic (PLEG): container finished" podID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerID="43e1e42f0f51b9501eada9df5600a37753dcd2c27cc6181d29c70a1a9b841cdd" exitCode=0 Feb 23 13:01:06.210778 master-0 kubenswrapper[7845]: I0223 13:01:06.210736 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 23 13:01:06.211194 master-0 kubenswrapper[7845]: I0223 13:01:06.211144 7845 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd" exitCode=1 Feb 23 13:01:06.211194 master-0 kubenswrapper[7845]: I0223 13:01:06.211178 7845 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="9b2e0681668d9a8b51eaa2c8d5041d6128575d63543d355f03fa756ab6c575b2" exitCode=0 Feb 23 13:01:06.219214 master-0 kubenswrapper[7845]: I0223 13:01:06.219152 7845 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" 
containerID="7d5bdcbce5e54abee67f20bf954b2be91c6e48fe8d182f1c276415bde1e373db" exitCode=1 Feb 23 13:01:06.244799 master-0 kubenswrapper[7845]: I0223 13:01:06.244717 7845 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="128581ddbe7657ebd83ea9ba25a542fc8f1d7245b7d7a38fdcce26195377d53b" exitCode=0 Feb 23 13:01:06.250577 master-0 kubenswrapper[7845]: I0223 13:01:06.250531 7845 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="2a70c0c29b6d30120d04b79d2da1e4abf09061bb5671dd422b5ce63244e7fbf8" exitCode=0 Feb 23 13:01:06.250577 master-0 kubenswrapper[7845]: I0223 13:01:06.250570 7845 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="d7c78d97c5c5cb888cf7f64ec84b51fa9486a9d5d5840d99c65981486e968902" exitCode=0 Feb 23 13:01:06.250577 master-0 kubenswrapper[7845]: I0223 13:01:06.250579 7845 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="313dcd35e66618a3a3a009757d79bf6b3b9afb4f0c77e372c518f0c8a219ea2f" exitCode=0 Feb 23 13:01:06.250577 master-0 kubenswrapper[7845]: I0223 13:01:06.250587 7845 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="aa169cb62afad633a7432fb996d7a5e8546ab3591767d1cbb4ee55535e914204" exitCode=0 Feb 23 13:01:06.250870 master-0 kubenswrapper[7845]: I0223 13:01:06.250596 7845 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="d363f0290cd5f73712e4ac4fe33436a5021a7548f84e19592e8c13df6abe2ebb" exitCode=0 Feb 23 13:01:06.250870 master-0 kubenswrapper[7845]: I0223 13:01:06.250606 7845 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="a490aeb54094c79e65d9b093b1d71d57a70012d976fefb24957c763212ff701d" exitCode=0 Feb 23 13:01:06.258025 master-0 
kubenswrapper[7845]: I0223 13:01:06.257978 7845 generic.go:334] "Generic (PLEG): container finished" podID="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" containerID="860a9e244b04d91c3a33beb656c339e8751b53849a1636cd6eb8994e31e07960" exitCode=0 Feb 23 13:01:06.261184 master-0 kubenswrapper[7845]: I0223 13:01:06.261150 7845 generic.go:334] "Generic (PLEG): container finished" podID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" containerID="db83ef82ac155acc22a9f418d8c50d6b04cf844595b5d8cd37f345df9398fd8f" exitCode=0 Feb 23 13:01:06.303286 master-0 kubenswrapper[7845]: E0223 13:01:06.303236 7845 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 23 13:01:06.316719 master-0 kubenswrapper[7845]: I0223 13:01:06.316680 7845 manager.go:324] Recovery completed Feb 23 13:01:06.349106 master-0 kubenswrapper[7845]: I0223 13:01:06.349044 7845 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 23 13:01:06.349106 master-0 kubenswrapper[7845]: I0223 13:01:06.349074 7845 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 23 13:01:06.349106 master-0 kubenswrapper[7845]: I0223 13:01:06.349108 7845 state_mem.go:36] "Initialized new in-memory state store" Feb 23 13:01:06.349437 master-0 kubenswrapper[7845]: I0223 13:01:06.349388 7845 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 23 13:01:06.349437 master-0 kubenswrapper[7845]: I0223 13:01:06.349409 7845 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 23 13:01:06.349550 master-0 kubenswrapper[7845]: I0223 13:01:06.349441 7845 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 23 13:01:06.349550 master-0 kubenswrapper[7845]: I0223 13:01:06.349456 7845 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 23 13:01:06.349550 master-0 kubenswrapper[7845]: I0223 13:01:06.349468 7845 policy_none.go:49] "None policy: Start" Feb 23 13:01:06.351868 master-0 kubenswrapper[7845]: I0223 
13:01:06.351798 7845 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 23 13:01:06.351959 master-0 kubenswrapper[7845]: I0223 13:01:06.351900 7845 state_mem.go:35] "Initializing new in-memory state store" Feb 23 13:01:06.352478 master-0 kubenswrapper[7845]: I0223 13:01:06.352425 7845 state_mem.go:75] "Updated machine memory state" Feb 23 13:01:06.352478 master-0 kubenswrapper[7845]: I0223 13:01:06.352468 7845 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 23 13:01:06.367224 master-0 kubenswrapper[7845]: I0223 13:01:06.367174 7845 manager.go:334] "Starting Device Plugin manager" Feb 23 13:01:06.367463 master-0 kubenswrapper[7845]: I0223 13:01:06.367235 7845 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 13:01:06.367463 master-0 kubenswrapper[7845]: I0223 13:01:06.367288 7845 server.go:79] "Starting device plugin registration server" Feb 23 13:01:06.368805 master-0 kubenswrapper[7845]: I0223 13:01:06.367880 7845 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 23 13:01:06.368805 master-0 kubenswrapper[7845]: I0223 13:01:06.367904 7845 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 23 13:01:06.368805 master-0 kubenswrapper[7845]: I0223 13:01:06.368136 7845 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 13:01:06.368805 master-0 kubenswrapper[7845]: I0223 13:01:06.368257 7845 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 13:01:06.368805 master-0 kubenswrapper[7845]: I0223 13:01:06.368267 7845 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 13:01:06.468145 master-0 kubenswrapper[7845]: I0223 13:01:06.468049 7845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:01:06.474308 master-0 
kubenswrapper[7845]: I0223 13:01:06.473742 7845 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:01:06.474308 master-0 kubenswrapper[7845]: I0223 13:01:06.473807 7845 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:01:06.474308 master-0 kubenswrapper[7845]: I0223 13:01:06.473823 7845 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:01:06.474308 master-0 kubenswrapper[7845]: I0223 13:01:06.473897 7845 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 13:01:06.487215 master-0 kubenswrapper[7845]: I0223 13:01:06.487151 7845 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 23 13:01:06.487365 master-0 kubenswrapper[7845]: I0223 13:01:06.487341 7845 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 23 13:01:06.503953 master-0 kubenswrapper[7845]: I0223 13:01:06.503850 7845 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"] Feb 23 13:01:06.504649 master-0 kubenswrapper[7845]: I0223 13:01:06.504588 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a95e454bc009280f30c693dc88db93f3cc1480aff05204c4d58205b2ffec4b" Feb 23 13:01:06.504810 master-0 kubenswrapper[7845]: I0223 13:01:06.504648 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"0bb705c5c9f04251f2f3ae5ef9f44d40f3c6c1b144c3946a4cd25703a7f7278f"} Feb 23 
13:01:06.504810 master-0 kubenswrapper[7845]: I0223 13:01:06.504755 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd"} Feb 23 13:01:06.504810 master-0 kubenswrapper[7845]: I0223 13:01:06.504789 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"9b2e0681668d9a8b51eaa2c8d5041d6128575d63543d355f03fa756ab6c575b2"} Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.504816 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02"} Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.504843 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"d3e83b689409ffab35b6bf3a0343f41dbacbec334285a8d86cf53a0625ccbea7"} Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.504869 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"321eaf326ad8a489a13ada6c53cf34c2c99e6344cfe3f0727f5eef006f9c7e8e"} Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.504893 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"7d5bdcbce5e54abee67f20bf954b2be91c6e48fe8d182f1c276415bde1e373db"} Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.504918 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5"} Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.504985 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ce530cb0a173a9b0ff41cae30abeb84b3d356a15907fb440c631cf7fbea736" Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.505010 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"177a00edcfd919e7d221798cd7875143318357f73a98d1f96f1e3d8cf020354d"} Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.505037 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021"} Feb 23 13:01:06.505053 master-0 kubenswrapper[7845]: I0223 13:01:06.505062 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b2243c1b0e1a884637ce32ff21a340a8fd2d151e689c0ac21c3f49c0279d57f8"} Feb 23 13:01:06.505734 master-0 kubenswrapper[7845]: I0223 13:01:06.505086 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b58d0f68f1bce11a0ca3232dc9f5a8f1bbd2f9babb595ae60e80f32714fa923e"} Feb 23 13:01:06.505734 master-0 kubenswrapper[7845]: I0223 13:01:06.505109 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"986ae970a2c0750329313ea9f039e9fe0804cca7630dc137dcff229019ea869e"} Feb 23 13:01:06.505734 master-0 kubenswrapper[7845]: I0223 13:01:06.505158 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"6f08e1116d82edc6d1a5a54978dd03f762876e6846750a14b497bad3e1b62afe"} Feb 23 13:01:06.505734 master-0 kubenswrapper[7845]: I0223 13:01:06.505193 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"7e9526f21d0004f4be338f194dd1d8ef03df5393e98a9f29994fc1a1aea54d33"} Feb 23 13:01:06.505734 master-0 kubenswrapper[7845]: I0223 13:01:06.505218 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"128581ddbe7657ebd83ea9ba25a542fc8f1d7245b7d7a38fdcce26195377d53b"} Feb 23 13:01:06.505734 master-0 kubenswrapper[7845]: I0223 13:01:06.505283 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c"} Feb 23 13:01:06.505734 master-0 kubenswrapper[7845]: I0223 13:01:06.505409 7845 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9aae3e10927df5e25b43b0ec4577a806fa88e6da8d69640506c1023ac0726cd4" Feb 23 13:01:06.521179 master-0 kubenswrapper[7845]: E0223 13:01:06.521101 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:01:06.521442 master-0 kubenswrapper[7845]: E0223 13:01:06.521277 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.522295 master-0 kubenswrapper[7845]: W0223 13:01:06.522201 7845 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 23 13:01:06.522386 master-0 kubenswrapper[7845]: E0223 13:01:06.522309 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:01:06.522386 master-0 kubenswrapper[7845]: E0223 13:01:06.522317 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 
13:01:06.522572 master-0 kubenswrapper[7845]: E0223 13:01:06.522377 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:01:06.564664 master-0 kubenswrapper[7845]: I0223 13:01:06.564598 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.564796 master-0 kubenswrapper[7845]: I0223 13:01:06.564643 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.564796 master-0 kubenswrapper[7845]: I0223 13:01:06.564703 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.564796 master-0 kubenswrapper[7845]: I0223 13:01:06.564721 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:01:06.564796 master-0 
kubenswrapper[7845]: I0223 13:01:06.564740 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.565022 master-0 kubenswrapper[7845]: I0223 13:01:06.564815 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.565022 master-0 kubenswrapper[7845]: I0223 13:01:06.564875 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:01:06.565022 master-0 kubenswrapper[7845]: I0223 13:01:06.564942 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.565022 master-0 kubenswrapper[7845]: I0223 13:01:06.564994 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.565235 master-0 kubenswrapper[7845]: I0223 13:01:06.565040 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:01:06.565235 master-0 kubenswrapper[7845]: I0223 13:01:06.565081 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:01:06.565235 master-0 kubenswrapper[7845]: I0223 13:01:06.565116 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.565478 master-0 kubenswrapper[7845]: I0223 13:01:06.565232 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.565478 master-0 kubenswrapper[7845]: I0223 13:01:06.565349 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:01:06.565478 master-0 kubenswrapper[7845]: I0223 13:01:06.565397 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:01:06.565478 master-0 kubenswrapper[7845]: I0223 13:01:06.565434 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.565478 master-0 kubenswrapper[7845]: I0223 13:01:06.565471 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.666088 master-0 kubenswrapper[7845]: I0223 13:01:06.665986 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.666422 master-0 kubenswrapper[7845]: I0223 13:01:06.666197 7845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.666422 master-0 kubenswrapper[7845]: I0223 13:01:06.666388 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.666658 master-0 kubenswrapper[7845]: I0223 13:01:06.666458 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:01:06.666658 master-0 kubenswrapper[7845]: I0223 13:01:06.666507 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:01:06.666658 master-0 kubenswrapper[7845]: I0223 13:01:06.666552 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.666658 master-0 kubenswrapper[7845]: I0223 13:01:06.666594 7845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.666658 master-0 kubenswrapper[7845]: I0223 13:01:06.666639 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.666685 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.666730 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.666779 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 
13:01:06.666830 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.666871 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.666910 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.666952 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.666996 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.667039 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:01:06.667121 master-0 kubenswrapper[7845]: I0223 13:01:06.667081 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667176 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667277 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667357 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667431 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667499 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667569 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667634 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667733 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667801 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.667941 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:01:06.668066 master-0 kubenswrapper[7845]: I0223 13:01:06.668017 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.669307 master-0 kubenswrapper[7845]: I0223 13:01:06.668084 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.669307 master-0 kubenswrapper[7845]: I0223 13:01:06.668147 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") 
" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:01:06.669307 master-0 kubenswrapper[7845]: I0223 13:01:06.668219 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:06.669307 master-0 kubenswrapper[7845]: I0223 13:01:06.668323 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:06.669307 master-0 kubenswrapper[7845]: I0223 13:01:06.668396 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:01:07.131854 master-0 kubenswrapper[7845]: I0223 13:01:07.131727 7845 apiserver.go:52] "Watching apiserver" Feb 23 13:01:07.146971 master-0 kubenswrapper[7845]: I0223 13:01:07.146832 7845 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 13:01:07.148582 master-0 kubenswrapper[7845]: I0223 13:01:07.148473 7845 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["assisted-installer/assisted-installer-controller-mtn6f","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924","openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7","openshift-config-operator/openshift-config-operator-6f47d587d6-p5488","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-network-operator/iptables-alerter-qd2ns","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx","openshift-dns-operator/dns-operator-8c7d49845-7466r","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp","openshift-ingress-operator/ingress-operator-6569778c84-gswst","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74","openshift-etcd/etcd-master-0-master-0","openshift-network-operator/network-operator-7d7db75979-rmsq8","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h","kube-system/bootstrap-kube-controller-manager-master-0","openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j","openshift-multus/multus-additional-cni-plugins-f7cf9","openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp","kube-system/bootstrap-kube-scheduler-master-0","openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn
","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8","openshift-network-diagnostics/network-check-target-shl6r","openshift-network-node-identity/network-node-identity-4wvxd","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86","openshift-marketplace/marketplace-operator-6f5488b997-28zcz","openshift-multus/multus-rmz8z","openshift-multus/network-metrics-daemon-kq2rk","openshift-ovn-kubernetes/ovnkube-node-45ncb"] Feb 23 13:01:07.149304 master-0 kubenswrapper[7845]: I0223 13:01:07.149210 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:07.153138 master-0 kubenswrapper[7845]: I0223 13:01:07.153057 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 13:01:07.153542 master-0 kubenswrapper[7845]: I0223 13:01:07.153465 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 13:01:07.153661 master-0 kubenswrapper[7845]: I0223 13:01:07.153600 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 13:01:07.153856 master-0 kubenswrapper[7845]: I0223 13:01:07.153809 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 13:01:07.154312 master-0 kubenswrapper[7845]: I0223 13:01:07.154237 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:07.154698 master-0 kubenswrapper[7845]: I0223 13:01:07.154572 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.155473 master-0 kubenswrapper[7845]: I0223 13:01:07.155411 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 13:01:07.155665 master-0 kubenswrapper[7845]: I0223 13:01:07.155486 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 13:01:07.158057 master-0 kubenswrapper[7845]: I0223 13:01:07.157968 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 13:01:07.158684 master-0 kubenswrapper[7845]: I0223 13:01:07.158589 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.163188 master-0 kubenswrapper[7845]: I0223 13:01:07.163114 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:07.163493 master-0 kubenswrapper[7845]: I0223 13:01:07.163411 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:07.164111 master-0 kubenswrapper[7845]: I0223 13:01:07.164043 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:07.167094 master-0 kubenswrapper[7845]: I0223 13:01:07.167030 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:01:07.167214 master-0 kubenswrapper[7845]: I0223 13:01:07.167153 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:01:07.167436 master-0 kubenswrapper[7845]: I0223 13:01:07.167390 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 13:01:07.168079 master-0 kubenswrapper[7845]: I0223 13:01:07.168006 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:07.169090 master-0 kubenswrapper[7845]: I0223 13:01:07.169020 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:07.170881 master-0 kubenswrapper[7845]: I0223 13:01:07.170797 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:01:07.171023 master-0 kubenswrapper[7845]: I0223 13:01:07.170993 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:01:07.172606 master-0 kubenswrapper[7845]: I0223 13:01:07.172464 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 13:01:07.172894 master-0 kubenswrapper[7845]: I0223 13:01:07.172834 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 13:01:07.173012 master-0 kubenswrapper[7845]: I0223 13:01:07.172947 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 13:01:07.173133 master-0 kubenswrapper[7845]: I0223 13:01:07.173096 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 13:01:07.173617 master-0 kubenswrapper[7845]: I0223 13:01:07.173542 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:01:07.174430 master-0 kubenswrapper[7845]: I0223 13:01:07.174356 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:01:07.174569 master-0 kubenswrapper[7845]: I0223 13:01:07.174459 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:01:07.174569 master-0 kubenswrapper[7845]: I0223 13:01:07.174524 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:01:07.174738 master-0 kubenswrapper[7845]: I0223 13:01:07.174577 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppk7\" (UniqueName: \"kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:01:07.174738 master-0 kubenswrapper[7845]: I0223 13:01:07.174642 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.174738 master-0 kubenswrapper[7845]: I0223 13:01:07.174695 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.175103 master-0 kubenswrapper[7845]: I0223 13:01:07.174743 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:01:07.175103 master-0 kubenswrapper[7845]: I0223 13:01:07.174860 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:07.175103 master-0 kubenswrapper[7845]: I0223 13:01:07.174931 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: 
\"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:01:07.175103 master-0 kubenswrapper[7845]: I0223 13:01:07.174995 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6rg\" (UniqueName: \"kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.175103 master-0 kubenswrapper[7845]: I0223 13:01:07.175047 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175102 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175193 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdnn5\" (UniqueName: \"kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175286 7845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175381 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175442 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175492 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175546 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175598 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.175700 master-0 kubenswrapper[7845]: I0223 13:01:07.175648 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.175714 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.175770 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " 
pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.175902 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.175961 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.176009 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.176097 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.176185 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.176339 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slw4h\" (UniqueName: \"kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.176401 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:01:07.176532 master-0 kubenswrapper[7845]: I0223 13:01:07.176458 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176519 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod 
\"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176639 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176712 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176745 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176766 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176826 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4j2q\" (UniqueName: \"kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q\") pod 
\"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176848 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176859 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176919 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.176746 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 13:01:07.177435 master-0 kubenswrapper[7845]: I0223 13:01:07.177355 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 13:01:07.178401 master-0 kubenswrapper[7845]: I0223 13:01:07.177534 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.178401 master-0 kubenswrapper[7845]: I0223 13:01:07.177604 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 13:01:07.178401 master-0 kubenswrapper[7845]: I0223 13:01:07.177811 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token\") pod 
\"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:07.178401 master-0 kubenswrapper[7845]: I0223 13:01:07.175496 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:01:07.178401 master-0 kubenswrapper[7845]: I0223 13:01:07.177991 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 23 13:01:07.178401 master-0 kubenswrapper[7845]: I0223 13:01:07.178010 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 13:01:07.178401 master-0 kubenswrapper[7845]: I0223 13:01:07.178348 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178489 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178596 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178662 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178722 7845 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178750 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178793 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178871 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178753 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.179048 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.178916 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.179263 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.179001 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.179821 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180013 7845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180160 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180521 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180547 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180637 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180685 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180773 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180876 7845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180881 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180896 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfrht\" (UniqueName: \"kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht\") pod \"csi-snapshot-controller-operator-6fb4df594f-sx924\" (UID: \"b7585f9f-12e5-451b-beeb-db43ae778f25\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.180928 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.181033 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.181138 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.181223 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.181280 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 
13:01:07.181323 master-0 kubenswrapper[7845]: I0223 13:01:07.181334 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.181224 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.181446 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.182533 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.182575 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.182625 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.182784 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.182844 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.182922 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.180949 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.183137 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.183346 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvr7p\" (UniqueName: \"kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.183447 7845 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.183472 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.183634 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.183512 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.183803 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngvd2\" (UniqueName: \"kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.183991 
7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhgkv\" (UniqueName: \"kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184072 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184201 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184300 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184347 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca\") pod 
\"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184446 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184516 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2csk2\" (UniqueName: \"kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184538 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184577 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: 
I0223 13:01:07.184616 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184633 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184684 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7h97\" (UniqueName: \"kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184737 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184691 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184783 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config\") pod 
\"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184785 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184885 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.184762 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.185806 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188456 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188479 7845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188520 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188512 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188577 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz9fr\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.185183 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188810 7845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188839 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188897 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.188067 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.189431 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.189617 7845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.190191 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.190705 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.191495 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.191962 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.192643 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.192804 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:01:07.201580 master-0 
kubenswrapper[7845]: I0223 13:01:07.193068 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193351 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193446 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmw9r\" (UniqueName: \"kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193517 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193566 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrhrx\" (UniqueName: 
\"kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193623 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193676 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8l8f\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193736 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193792 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod 
\"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193835 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.193907 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194061 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194182 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194470 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194485 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4h6l\" (UniqueName: \"kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l\") pod 
\"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194579 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194635 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194705 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194766 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194998 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.195105 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.195541 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.195806 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" 
Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.194795 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.196040 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.196353 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.198586 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.198646 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.200181 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.200706 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv5f\" (UniqueName: \"kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: 
I0223 13:01:07.200769 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.201043 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 13:01:07.201580 master-0 kubenswrapper[7845]: I0223 13:01:07.201082 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.203973 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.204168 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.204420 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.204575 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.204834 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.204976 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.205059 7845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.205098 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.205271 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.205506 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.206615 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 23 13:01:07.209440 master-0 kubenswrapper[7845]: I0223 13:01:07.207131 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 13:01:07.215868 master-0 kubenswrapper[7845]: I0223 13:01:07.215741 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:07.217164 master-0 kubenswrapper[7845]: I0223 13:01:07.217102 7845 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 13:01:07.219546 master-0 kubenswrapper[7845]: I0223 13:01:07.219447 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.223848 master-0 kubenswrapper[7845]: I0223 13:01:07.223647 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppk7\" (UniqueName: \"kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:01:07.223848 master-0 kubenswrapper[7845]: I0223 13:01:07.223749 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 23 13:01:07.223848 master-0 kubenswrapper[7845]: I0223 13:01:07.223793 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:07.224471 master-0 kubenswrapper[7845]: I0223 13:01:07.224415 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:07.227028 master-0 kubenswrapper[7845]: I0223 13:01:07.226984 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4j2q\" (UniqueName: \"kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:01:07.236726 master-0 kubenswrapper[7845]: I0223 13:01:07.236650 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 13:01:07.239415 master-0 kubenswrapper[7845]: I0223 13:01:07.239361 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 13:01:07.240446 master-0 kubenswrapper[7845]: I0223 13:01:07.240397 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 13:01:07.241373 master-0 kubenswrapper[7845]: I0223 13:01:07.241308 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.243489 master-0 kubenswrapper[7845]: I0223 13:01:07.243293 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 13:01:07.243594 master-0 kubenswrapper[7845]: I0223 13:01:07.243486 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 13:01:07.245500 master-0 kubenswrapper[7845]: I0223 13:01:07.245293 7845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:07.251721 master-0 kubenswrapper[7845]: I0223 13:01:07.251683 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:07.255029 master-0 kubenswrapper[7845]: I0223 13:01:07.254755 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 13:01:07.257042 master-0 kubenswrapper[7845]: I0223 13:01:07.256866 7845 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 23 13:01:07.274553 master-0 kubenswrapper[7845]: I0223 13:01:07.274495 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 13:01:07.301783 master-0 kubenswrapper[7845]: I0223 13:01:07.301698 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:01:07.301783 master-0 kubenswrapper[7845]: I0223 13:01:07.301775 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.302147 master-0 kubenswrapper[7845]: I0223 13:01:07.301803 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 13:01:07.302147 master-0 kubenswrapper[7845]: I0223 13:01:07.301846 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:07.302147 master-0 kubenswrapper[7845]: I0223 13:01:07.301898 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:01:07.302147 master-0 kubenswrapper[7845]: I0223 13:01:07.301937 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.302147 master-0 kubenswrapper[7845]: I0223 13:01:07.301976 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.302147 master-0 kubenswrapper[7845]: I0223 13:01:07.302050 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.302483 master-0 kubenswrapper[7845]: I0223 13:01:07.302149 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:01:07.302483 master-0 kubenswrapper[7845]: I0223 13:01:07.302272 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:01:07.302483 master-0 kubenswrapper[7845]: E0223 13:01:07.302320 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:01:07.302483 master-0 kubenswrapper[7845]: I0223 13:01:07.302453 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.302655 master-0 kubenswrapper[7845]: E0223 13:01:07.302634 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.802605409 +0000 UTC m=+1.798336310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found Feb 23 13:01:07.303043 master-0 kubenswrapper[7845]: I0223 13:01:07.302969 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v7b9\" (UniqueName: \"kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.303043 master-0 kubenswrapper[7845]: I0223 13:01:07.302980 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:01:07.303440 master-0 kubenswrapper[7845]: I0223 13:01:07.303052 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.303440 master-0 kubenswrapper[7845]: I0223 13:01:07.303208 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.303440 master-0 kubenswrapper[7845]: I0223 13:01:07.303290 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.303610 master-0 kubenswrapper[7845]: I0223 13:01:07.303498 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crt2t\" (UniqueName: \"kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:01:07.303610 master-0 kubenswrapper[7845]: I0223 13:01:07.303538 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.303610 master-0 kubenswrapper[7845]: I0223 13:01:07.303572 7845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303610 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303648 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303657 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303681 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303747 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303781 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303815 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303844 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303938 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: E0223 13:01:07.303961 7845 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.303944 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: E0223 13:01:07.304113 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.804084894 +0000 UTC m=+1.799815795 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.304276 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:01:07.304412 master-0 kubenswrapper[7845]: I0223 13:01:07.304402 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.305164 master-0 kubenswrapper[7845]: I0223 13:01:07.304515 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjthf\" (UniqueName: \"kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:01:07.305164 master-0 kubenswrapper[7845]: I0223 13:01:07.304582 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.305164 master-0 kubenswrapper[7845]: I0223 13:01:07.304738 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt9nl\" (UniqueName: \"kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.305164 master-0 kubenswrapper[7845]: I0223 13:01:07.304780 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.305164 master-0 kubenswrapper[7845]: I0223 13:01:07.304879 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.305164 master-0 kubenswrapper[7845]: I0223 13:01:07.305044 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jg7c\" (UniqueName: \"kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.305164 master-0 kubenswrapper[7845]: I0223 13:01:07.305105 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod 
\"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: E0223 13:01:07.305200 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: I0223 13:01:07.305212 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: E0223 13:01:07.305281 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.805233978 +0000 UTC m=+1.800964889 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: I0223 13:01:07.305514 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: I0223 13:01:07.305595 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: I0223 13:01:07.305612 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: I0223 13:01:07.305724 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.306089 master-0 
kubenswrapper[7845]: I0223 13:01:07.305774 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: I0223 13:01:07.305850 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwphb\" (UniqueName: \"kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: E0223 13:01:07.305883 7845 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: I0223 13:01:07.305907 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:01:07.306089 master-0 kubenswrapper[7845]: E0223 13:01:07.306023 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.805987491 +0000 UTC m=+1.801718402 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306122 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306200 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306208 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: E0223 13:01:07.306370 7845 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: E0223 13:01:07.306434 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. 
No retries permitted until 2026-02-23 13:01:07.806416174 +0000 UTC m=+1.802147085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306485 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306559 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306607 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306636 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert\") pod 
\"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306648 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: E0223 13:01:07.306773 7845 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:07.306789 master-0 kubenswrapper[7845]: I0223 13:01:07.306768 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: E0223 13:01:07.306864 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.806834176 +0000 UTC m=+1.802565077 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: I0223 13:01:07.306901 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: I0223 13:01:07.307032 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: I0223 13:01:07.307285 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: I0223 13:01:07.307354 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.307706 master-0 
kubenswrapper[7845]: I0223 13:01:07.307435 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: I0223 13:01:07.307470 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: I0223 13:01:07.307573 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: I0223 13:01:07.307619 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.307706 master-0 kubenswrapper[7845]: I0223 13:01:07.307658 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.307706 
master-0 kubenswrapper[7845]: I0223 13:01:07.307704 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.307713 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.307785 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: E0223 13:01:07.307808 7845 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.307824 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: E0223 13:01:07.307865 7845 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.807847207 +0000 UTC m=+1.803578118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.307922 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.307965 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsp9d\" (UniqueName: \"kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.308020 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.308084 7845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.308151 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.308159 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.308216 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.308367 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 
13:01:07.308402 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.308441 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.308495 master-0 kubenswrapper[7845]: I0223 13:01:07.308439 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plz5n\" (UniqueName: \"kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.308519 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.308564 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " 
pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.308583 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.308998 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309127 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309160 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309317 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod 
\"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309366 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: E0223 13:01:07.309505 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309530 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: E0223 13:01:07.309592 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.809567248 +0000 UTC m=+1.805298149 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309612 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309654 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309802 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.309884 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: 
\"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.310000 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.310271 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: E0223 13:01:07.310313 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: I0223 13:01:07.310388 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.310534 master-0 kubenswrapper[7845]: E0223 13:01:07.310495 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.810475855 +0000 UTC m=+1.806206756 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found Feb 23 13:01:07.312849 master-0 kubenswrapper[7845]: I0223 13:01:07.311168 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.315543 master-0 kubenswrapper[7845]: I0223 13:01:07.315504 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 13:01:07.321721 master-0 kubenswrapper[7845]: I0223 13:01:07.321648 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.351784 master-0 kubenswrapper[7845]: I0223 13:01:07.351662 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:01:07.354812 master-0 kubenswrapper[7845]: I0223 13:01:07.354585 7845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 13:01:07.365608 master-0 kubenswrapper[7845]: I0223 13:01:07.365541 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.396157 master-0 kubenswrapper[7845]: I0223 13:01:07.396010 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6rg\" (UniqueName: \"kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:01:07.412465 master-0 kubenswrapper[7845]: I0223 13:01:07.412365 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.412613 master-0 kubenswrapper[7845]: I0223 13:01:07.412539 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.412678 master-0 kubenswrapper[7845]: I0223 13:01:07.412618 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: 
\"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.412678 master-0 kubenswrapper[7845]: I0223 13:01:07.412357 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slw4h\" (UniqueName: \"kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:07.412862 master-0 kubenswrapper[7845]: I0223 13:01:07.412688 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.412975 master-0 kubenswrapper[7845]: I0223 13:01:07.412899 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.413084 master-0 kubenswrapper[7845]: I0223 13:01:07.412899 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.413084 master-0 kubenswrapper[7845]: I0223 13:01:07.413055 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.413333 master-0 kubenswrapper[7845]: I0223 13:01:07.413106 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.413333 master-0 kubenswrapper[7845]: I0223 13:01:07.413140 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.414063 master-0 kubenswrapper[7845]: I0223 13:01:07.413838 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.414063 master-0 kubenswrapper[7845]: I0223 13:01:07.413988 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.414433 master-0 kubenswrapper[7845]: I0223 13:01:07.414371 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.414531 master-0 kubenswrapper[7845]: I0223 
13:01:07.414454 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.414531 master-0 kubenswrapper[7845]: I0223 13:01:07.414500 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.414709 master-0 kubenswrapper[7845]: I0223 13:01:07.414587 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.414709 master-0 kubenswrapper[7845]: I0223 13:01:07.414624 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.414709 master-0 kubenswrapper[7845]: I0223 13:01:07.414654 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.414904 master-0 kubenswrapper[7845]: I0223 13:01:07.414743 7845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.414904 master-0 kubenswrapper[7845]: I0223 13:01:07.414757 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.414904 master-0 kubenswrapper[7845]: I0223 13:01:07.414820 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.415124 master-0 kubenswrapper[7845]: I0223 13:01:07.414928 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.415124 master-0 kubenswrapper[7845]: I0223 13:01:07.414982 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.415124 master-0 kubenswrapper[7845]: I0223 13:01:07.415038 7845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:01:07.415124 master-0 kubenswrapper[7845]: I0223 13:01:07.415088 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.415124 master-0 kubenswrapper[7845]: I0223 13:01:07.415108 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.415506 master-0 kubenswrapper[7845]: I0223 13:01:07.415144 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.415506 master-0 kubenswrapper[7845]: I0223 13:01:07.415273 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.415506 master-0 kubenswrapper[7845]: E0223 13:01:07.415311 
7845 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:07.415506 master-0 kubenswrapper[7845]: I0223 13:01:07.415350 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:01:07.415506 master-0 kubenswrapper[7845]: E0223 13:01:07.415383 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.915358903 +0000 UTC m=+1.911089814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found Feb 23 13:01:07.415506 master-0 kubenswrapper[7845]: I0223 13:01:07.415419 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.415506 master-0 kubenswrapper[7845]: I0223 13:01:07.415474 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " 
pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.415915 master-0 kubenswrapper[7845]: I0223 13:01:07.415527 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.415915 master-0 kubenswrapper[7845]: I0223 13:01:07.415615 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:01:07.415915 master-0 kubenswrapper[7845]: I0223 13:01:07.415695 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.415915 master-0 kubenswrapper[7845]: I0223 13:01:07.415753 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.415915 master-0 kubenswrapper[7845]: I0223 13:01:07.415777 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " 
pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.416632 master-0 kubenswrapper[7845]: E0223 13:01:07.415967 7845 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:07.416632 master-0 kubenswrapper[7845]: E0223 13:01:07.416062 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.916006872 +0000 UTC m=+1.911737783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found Feb 23 13:01:07.416632 master-0 kubenswrapper[7845]: I0223 13:01:07.416162 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.416632 master-0 kubenswrapper[7845]: I0223 13:01:07.416238 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.416632 master-0 kubenswrapper[7845]: I0223 13:01:07.416407 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.416632 master-0 kubenswrapper[7845]: I0223 13:01:07.416515 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:01:07.416632 master-0 kubenswrapper[7845]: I0223 13:01:07.416589 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.417106 master-0 kubenswrapper[7845]: I0223 13:01:07.416663 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.417106 master-0 kubenswrapper[7845]: I0223 13:01:07.416767 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.417106 master-0 kubenswrapper[7845]: I0223 13:01:07.416880 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.417106 master-0 kubenswrapper[7845]: I0223 13:01:07.417058 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.417374 master-0 kubenswrapper[7845]: I0223 13:01:07.417139 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.417374 master-0 kubenswrapper[7845]: I0223 13:01:07.417230 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.417493 master-0 kubenswrapper[7845]: I0223 13:01:07.417390 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:01:07.417493 master-0 kubenswrapper[7845]: I0223 13:01:07.417431 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.417610 master-0 kubenswrapper[7845]: I0223 13:01:07.417521 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.417610 master-0 kubenswrapper[7845]: I0223 13:01:07.417594 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.417727 master-0 kubenswrapper[7845]: I0223 13:01:07.417668 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:01:07.417818 master-0 kubenswrapper[7845]: I0223 13:01:07.417768 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.418360 master-0 kubenswrapper[7845]: I0223 13:01:07.417970 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.418360 master-0 kubenswrapper[7845]: I0223 13:01:07.418066 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.418360 master-0 kubenswrapper[7845]: I0223 13:01:07.418145 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.418360 master-0 kubenswrapper[7845]: I0223 13:01:07.418225 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:07.418360 master-0 kubenswrapper[7845]: I0223 13:01:07.418232 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:01:07.418360 master-0 kubenswrapper[7845]: I0223 13:01:07.418299 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: 
\"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:07.418888 master-0 kubenswrapper[7845]: I0223 13:01:07.418369 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:07.418888 master-0 kubenswrapper[7845]: I0223 13:01:07.418512 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:07.418888 master-0 kubenswrapper[7845]: I0223 13:01:07.418693 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:01:07.418888 master-0 kubenswrapper[7845]: I0223 13:01:07.418769 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:07.418888 master-0 kubenswrapper[7845]: I0223 13:01:07.418811 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:01:07.418888 master-0 kubenswrapper[7845]: I0223 13:01:07.418865 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: I0223 13:01:07.418918 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: I0223 13:01:07.418960 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: I0223 13:01:07.419030 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: I0223 13:01:07.419090 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: I0223 13:01:07.419142 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: E0223 13:01:07.419296 7845 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: I0223 13:01:07.419329 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: E0223 13:01:07.419355 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:07.919333292 +0000 UTC m=+1.915064193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : secret "metrics-daemon-secret" not found
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: I0223 13:01:07.419384 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:01:07.419462 master-0 kubenswrapper[7845]: I0223 13:01:07.419460 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9"
Feb 23 13:01:07.440044 master-0 kubenswrapper[7845]: I0223 13:01:07.439896 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:07.456875 master-0 kubenswrapper[7845]: I0223 13:01:07.455466 7845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:01:07.461875 master-0 kubenswrapper[7845]: I0223 13:01:07.461651 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvr7p\" (UniqueName: \"kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:01:07.489435 master-0 kubenswrapper[7845]: I0223 13:01:07.489356 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngvd2\" (UniqueName: \"kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:07.494766 master-0 kubenswrapper[7845]: I0223 13:01:07.494634 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhgkv\" (UniqueName: \"kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:07.519809 master-0 kubenswrapper[7845]: I0223 13:01:07.519709 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:07.529433 master-0 kubenswrapper[7845]: I0223 13:01:07.529351 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7h97\" (UniqueName: \"kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx"
Feb 23 13:01:07.558370 master-0 kubenswrapper[7845]: I0223 13:01:07.558239 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfrht\" (UniqueName: \"kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht\") pod \"csi-snapshot-controller-operator-6fb4df594f-sx924\" (UID: \"b7585f9f-12e5-451b-beeb-db43ae778f25\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924"
Feb 23 13:01:07.569200 master-0 kubenswrapper[7845]: I0223 13:01:07.569136 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2csk2\" (UniqueName: \"kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:01:07.594076 master-0 kubenswrapper[7845]: I0223 13:01:07.594001 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdnn5\" (UniqueName: \"kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:01:07.618031 master-0 kubenswrapper[7845]: I0223 13:01:07.617960 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz9fr\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:07.825472 master-0 kubenswrapper[7845]: I0223 13:01:07.825404 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:07.825805 master-0 kubenswrapper[7845]: E0223 13:01:07.825677 7845 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:01:07.825805 master-0 kubenswrapper[7845]: E0223 13:01:07.825794 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.825761687 +0000 UTC m=+2.821492598 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found
Feb 23 13:01:07.826005 master-0 kubenswrapper[7845]: I0223 13:01:07.825871 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:01:07.826005 master-0 kubenswrapper[7845]: I0223 13:01:07.825951 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:07.826129 master-0 kubenswrapper[7845]: E0223 13:01:07.826093 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 23 13:01:07.826191 master-0 kubenswrapper[7845]: E0223 13:01:07.826136 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.826122138 +0000 UTC m=+2.821853049 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found
Feb 23 13:01:07.826323 master-0 kubenswrapper[7845]: I0223 13:01:07.826279 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:07.826448 master-0 kubenswrapper[7845]: E0223 13:01:07.826318 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:01:07.826448 master-0 kubenswrapper[7845]: I0223 13:01:07.826420 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:07.826579 master-0 kubenswrapper[7845]: E0223 13:01:07.826485 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.826422337 +0000 UTC m=+2.822153238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found
Feb 23 13:01:07.826579 master-0 kubenswrapper[7845]: E0223 13:01:07.826519 7845 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:07.826579 master-0 kubenswrapper[7845]: E0223 13:01:07.826572 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.826557761 +0000 UTC m=+2.822288672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:07.826765 master-0 kubenswrapper[7845]: I0223 13:01:07.826681 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:07.826765 master-0 kubenswrapper[7845]: E0223 13:01:07.826705 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:01:07.826880 master-0 kubenswrapper[7845]: E0223 13:01:07.826800 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.826783458 +0000 UTC m=+2.822514359 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found
Feb 23 13:01:07.826880 master-0 kubenswrapper[7845]: E0223 13:01:07.826806 7845 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 23 13:01:07.826880 master-0 kubenswrapper[7845]: E0223 13:01:07.826849 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.82683458 +0000 UTC m=+2.822565481 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found
Feb 23 13:01:07.826880 master-0 kubenswrapper[7845]: I0223 13:01:07.826736 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 13:01:07.827110 master-0 kubenswrapper[7845]: E0223 13:01:07.826919 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:01:07.827110 master-0 kubenswrapper[7845]: I0223 13:01:07.826969 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:07.827110 master-0 kubenswrapper[7845]: E0223 13:01:07.826981 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.826962503 +0000 UTC m=+2.822693624 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:01:07.827110 master-0 kubenswrapper[7845]: E0223 13:01:07.827060 7845 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:07.827110 master-0 kubenswrapper[7845]: E0223 13:01:07.827100 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.827088537 +0000 UTC m=+2.822819698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found
Feb 23 13:01:07.827413 master-0 kubenswrapper[7845]: I0223 13:01:07.827119 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:01:07.827413 master-0 kubenswrapper[7845]: E0223 13:01:07.827268 7845 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:01:07.827413 master-0 kubenswrapper[7845]: E0223 13:01:07.827337 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.827316754 +0000 UTC m=+2.823047865 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found
Feb 23 13:01:07.928291 master-0 kubenswrapper[7845]: I0223 13:01:07.928196 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:07.928549 master-0 kubenswrapper[7845]: I0223 13:01:07.928313 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:01:07.928549 master-0 kubenswrapper[7845]: I0223 13:01:07.928395 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:01:07.929056 master-0 kubenswrapper[7845]: E0223 13:01:07.928768 7845 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 23 13:01:07.929056 master-0 kubenswrapper[7845]: E0223 13:01:07.928898 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.928868181 +0000 UTC m=+2.924599092 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : secret "metrics-daemon-secret" not found
Feb 23 13:01:07.929056 master-0 kubenswrapper[7845]: E0223 13:01:07.928946 7845 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:07.929056 master-0 kubenswrapper[7845]: E0223 13:01:07.928993 7845 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:01:07.929056 master-0 kubenswrapper[7845]: E0223 13:01:07.929018 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.928997305 +0000 UTC m=+2.924728206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found
Feb 23 13:01:07.929056 master-0 kubenswrapper[7845]: E0223 13:01:07.929043 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:08.929029946 +0000 UTC m=+2.924760857 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found
Feb 23 13:01:08.447452 master-0 kubenswrapper[7845]: W0223 13:01:08.444863 7845 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 23 13:01:08.447452 master-0 kubenswrapper[7845]: E0223 13:01:08.445015 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:01:08.447452 master-0 kubenswrapper[7845]: E0223 13:01:08.445472 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:01:08.447452 master-0 kubenswrapper[7845]: E0223 13:01:08.446335 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:01:08.447452 master-0 kubenswrapper[7845]: E0223 13:01:08.447200 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:01:08.449042 master-0 kubenswrapper[7845]: E0223 13:01:08.447893 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:01:08.479450 master-0 kubenswrapper[7845]: I0223 13:01:08.463138 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrhrx\" (UniqueName: \"kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"
Feb 23 13:01:08.479450 master-0 kubenswrapper[7845]: I0223 13:01:08.475919 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjthf\" (UniqueName: \"kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:08.479450 master-0 kubenswrapper[7845]: I0223 13:01:08.477809 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4h6l\" (UniqueName: \"kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:01:08.480925 master-0 kubenswrapper[7845]: I0223 13:01:08.480871 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmw9r\" (UniqueName: \"kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:01:08.484279 master-0 kubenswrapper[7845]: I0223 13:01:08.484198 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:01:08.484979 master-0 kubenswrapper[7845]: I0223 13:01:08.484935 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8l8f\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:08.488424 master-0 kubenswrapper[7845]: I0223 13:01:08.488369 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwphb\" (UniqueName: \"kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:01:08.491001 master-0 kubenswrapper[7845]: I0223 13:01:08.490920 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv5f\" (UniqueName: \"kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:08.494067 master-0 kubenswrapper[7845]: I0223 13:01:08.493905 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:01:08.495617 master-0 kubenswrapper[7845]: I0223 13:01:08.495009 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v7b9\" (UniqueName: \"kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:01:08.496583 master-0 kubenswrapper[7845]: I0223 13:01:08.496541 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsp9d\" (UniqueName: \"kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h"
Feb 23 13:01:08.496952 master-0 kubenswrapper[7845]: I0223 13:01:08.496915 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:01:08.498118 master-0 kubenswrapper[7845]: I0223 13:01:08.498085 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plz5n\" (UniqueName: \"kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:01:08.503906 master-0 kubenswrapper[7845]: I0223 13:01:08.503832 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crt2t\" (UniqueName: \"kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:01:08.506491 master-0 kubenswrapper[7845]: I0223 13:01:08.506266 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jg7c\" (UniqueName: \"kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9"
Feb 23 13:01:08.510164 master-0 kubenswrapper[7845]: I0223 13:01:08.510112 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt9nl\" (UniqueName: \"kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:01:08.514389 master-0 kubenswrapper[7845]: I0223 13:01:08.514325 7845 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 23 13:01:08.522744 master-0 kubenswrapper[7845]: I0223 13:01:08.522656 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:01:08.675632 master-0 kubenswrapper[7845]: I0223 13:01:08.675587 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:01:08.847208 master-0 kubenswrapper[7845]: I0223 13:01:08.846572 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:08.847208 master-0 kubenswrapper[7845]: I0223 13:01:08.847040 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:08.847208 master-0 kubenswrapper[7845]: I0223 13:01:08.847080 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 13:01:08.847208 master-0 kubenswrapper[7845]: I0223 13:01:08.847114 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:08.847208 master-0 kubenswrapper[7845]: I0223 13:01:08.847139 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:01:08.847208 master-0 kubenswrapper[7845]: I0223 13:01:08.847166 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:08.847208 master-0 kubenswrapper[7845]: I0223 13:01:08.847194 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:08.847208 master-0 kubenswrapper[7845]: I0223 13:01:08.847215 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:08.847686 master-0 kubenswrapper[7845]: I0223 13:01:08.847264 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:08.847686 master-0 kubenswrapper[7845]: E0223 13:01:08.847417 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:01:08.847686 master-0 kubenswrapper[7845]: E0223 13:01:08.847491 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.847473336 +0000 UTC m=+4.843204207 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found Feb 23 13:01:08.847823 master-0 kubenswrapper[7845]: E0223 13:01:08.847790 7845 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:08.847859 master-0 kubenswrapper[7845]: E0223 13:01:08.847827 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.847816936 +0000 UTC m=+4.843547797 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:08.847900 master-0 kubenswrapper[7845]: E0223 13:01:08.847869 7845 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:08.847900 master-0 kubenswrapper[7845]: E0223 13:01:08.847887 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.847882588 +0000 UTC m=+4.843613459 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found Feb 23 13:01:08.847970 master-0 kubenswrapper[7845]: E0223 13:01:08.847919 7845 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:08.847970 master-0 kubenswrapper[7845]: E0223 13:01:08.847939 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.847933619 +0000 UTC m=+4.843664480 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found Feb 23 13:01:08.848027 master-0 kubenswrapper[7845]: E0223 13:01:08.847975 7845 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:01:08.848027 master-0 kubenswrapper[7845]: E0223 13:01:08.847992 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.847987571 +0000 UTC m=+4.843718442 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found Feb 23 13:01:08.848027 master-0 kubenswrapper[7845]: E0223 13:01:08.848024 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:08.848121 master-0 kubenswrapper[7845]: E0223 13:01:08.848041 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.848036172 +0000 UTC m=+4.843767043 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:08.848121 master-0 kubenswrapper[7845]: E0223 13:01:08.848092 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 23 13:01:08.848121 master-0 kubenswrapper[7845]: E0223 13:01:08.848110 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.848103054 +0000 UTC m=+4.843833925 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found Feb 23 13:01:08.848209 master-0 kubenswrapper[7845]: E0223 13:01:08.848151 7845 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:08.848209 master-0 kubenswrapper[7845]: E0223 13:01:08.848168 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.848163876 +0000 UTC m=+4.843894747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:08.848209 master-0 kubenswrapper[7845]: E0223 13:01:08.848201 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:08.848434 master-0 kubenswrapper[7845]: E0223 13:01:08.848219 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.848212608 +0000 UTC m=+4.843943479 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:08.921997 master-0 kubenswrapper[7845]: I0223 13:01:08.921936 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-shl6r"] Feb 23 13:01:08.927971 master-0 kubenswrapper[7845]: W0223 13:01:08.927892 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0c7587b_eea6_4d98_b39d_3a0feba4035d.slice/crio-c5a186719c5336b48d37cc198d7b066ec48103dfdc1d217163ebf123ed0ab417 WatchSource:0}: Error finding container c5a186719c5336b48d37cc198d7b066ec48103dfdc1d217163ebf123ed0ab417: Status 404 returned error can't find the container with id c5a186719c5336b48d37cc198d7b066ec48103dfdc1d217163ebf123ed0ab417 Feb 23 13:01:08.948593 master-0 kubenswrapper[7845]: I0223 13:01:08.948532 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:01:08.948691 master-0 kubenswrapper[7845]: I0223 13:01:08.948608 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:01:08.948691 master-0 kubenswrapper[7845]: I0223 13:01:08.948637 7845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:01:08.948906 master-0 kubenswrapper[7845]: E0223 13:01:08.948870 7845 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 23 13:01:08.948958 master-0 kubenswrapper[7845]: E0223 13:01:08.948940 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.948913839 +0000 UTC m=+4.944644710 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : secret "metrics-daemon-secret" not found Feb 23 13:01:08.949450 master-0 kubenswrapper[7845]: E0223 13:01:08.949415 7845 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:08.949509 master-0 kubenswrapper[7845]: E0223 13:01:08.949467 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.949456656 +0000 UTC m=+4.945187527 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found Feb 23 13:01:08.949588 master-0 kubenswrapper[7845]: E0223 13:01:08.949524 7845 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:08.949588 master-0 kubenswrapper[7845]: E0223 13:01:08.949548 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:10.949542198 +0000 UTC m=+4.945273069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found Feb 23 13:01:09.278225 master-0 kubenswrapper[7845]: I0223 13:01:09.278086 7845 generic.go:334] "Generic (PLEG): container finished" podID="24dab1bc-cf56-429b-93ce-911970c41b5c" containerID="e0b0c5dcd2cd007a994c23cec23f8805edde2250fc578b36745a7a529644718b" exitCode=0 Feb 23 13:01:09.278225 master-0 kubenswrapper[7845]: I0223 13:01:09.278175 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" event={"ID":"24dab1bc-cf56-429b-93ce-911970c41b5c","Type":"ContainerDied","Data":"e0b0c5dcd2cd007a994c23cec23f8805edde2250fc578b36745a7a529644718b"} Feb 23 13:01:09.280974 master-0 kubenswrapper[7845]: I0223 13:01:09.280909 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-shl6r" 
event={"ID":"d0c7587b-eea6-4d98-b39d-3a0feba4035d","Type":"ContainerStarted","Data":"8a3afd5395cce2bdecf1f2f2b0cbece011eff9331fa483cc7262a842151d5c44"} Feb 23 13:01:09.280974 master-0 kubenswrapper[7845]: I0223 13:01:09.280954 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-shl6r" event={"ID":"d0c7587b-eea6-4d98-b39d-3a0feba4035d","Type":"ContainerStarted","Data":"c5a186719c5336b48d37cc198d7b066ec48103dfdc1d217163ebf123ed0ab417"} Feb 23 13:01:09.283275 master-0 kubenswrapper[7845]: I0223 13:01:09.282570 7845 generic.go:334] "Generic (PLEG): container finished" podID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerID="a097939ffa402c84b79b8f7d24af36dfd241d3d508ee58d590cce7445e784fed" exitCode=0 Feb 23 13:01:09.283275 master-0 kubenswrapper[7845]: I0223 13:01:09.282788 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerDied","Data":"a097939ffa402c84b79b8f7d24af36dfd241d3d508ee58d590cce7445e784fed"} Feb 23 13:01:09.668423 master-0 kubenswrapper[7845]: I0223 13:01:09.668107 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:09.672673 master-0 kubenswrapper[7845]: I0223 13:01:09.672503 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:10.179043 master-0 kubenswrapper[7845]: I0223 13:01:10.178992 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:01:10.730049 master-0 kubenswrapper[7845]: I0223 13:01:10.729927 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:10.765887 master-0 
kubenswrapper[7845]: I0223 13:01:10.765827 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:10.797742 master-0 kubenswrapper[7845]: I0223 13:01:10.797637 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:10.803606 master-0 kubenswrapper[7845]: I0223 13:01:10.803544 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 23 13:01:10.876455 master-0 kubenswrapper[7845]: I0223 13:01:10.876357 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:10.876455 master-0 kubenswrapper[7845]: I0223 13:01:10.876441 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:10.876835 master-0 kubenswrapper[7845]: I0223 13:01:10.876527 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:10.876835 master-0 kubenswrapper[7845]: 
I0223 13:01:10.876600 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:10.876835 master-0 kubenswrapper[7845]: I0223 13:01:10.876640 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:10.876835 master-0 kubenswrapper[7845]: I0223 13:01:10.876697 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:10.876835 master-0 kubenswrapper[7845]: I0223 13:01:10.876787 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:10.876835 master-0 kubenswrapper[7845]: I0223 13:01:10.876833 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:10.877185 master-0 kubenswrapper[7845]: I0223 13:01:10.876869 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:10.877600 master-0 kubenswrapper[7845]: E0223 13:01:10.877544 7845 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:10.877700 master-0 kubenswrapper[7845]: E0223 13:01:10.877630 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.877605792 +0000 UTC m=+8.873336693 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found Feb 23 13:01:10.878455 master-0 kubenswrapper[7845]: E0223 13:01:10.878195 7845 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:10.878529 master-0 kubenswrapper[7845]: E0223 13:01:10.878473 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.878239561 +0000 UTC m=+8.873970472 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found Feb 23 13:01:10.878784 master-0 kubenswrapper[7845]: E0223 13:01:10.878735 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:01:10.878869 master-0 kubenswrapper[7845]: E0223 13:01:10.878820 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.878794908 +0000 UTC m=+8.874525819 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found Feb 23 13:01:10.878984 master-0 kubenswrapper[7845]: E0223 13:01:10.878921 7845 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:10.879076 master-0 kubenswrapper[7845]: E0223 13:01:10.879042 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.879016744 +0000 UTC m=+8.874747615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:10.879076 master-0 kubenswrapper[7845]: E0223 13:01:10.879047 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 23 13:01:10.879304 master-0 kubenswrapper[7845]: E0223 13:01:10.879102 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:10.879304 master-0 kubenswrapper[7845]: E0223 13:01:10.879128 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert 
podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.879121728 +0000 UTC m=+8.874852599 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:10.879304 master-0 kubenswrapper[7845]: E0223 13:01:10.879166 7845 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:10.879304 master-0 kubenswrapper[7845]: E0223 13:01:10.879172 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.879138858 +0000 UTC m=+8.874869769 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found Feb 23 13:01:10.879304 master-0 kubenswrapper[7845]: E0223 13:01:10.879203 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.87918977 +0000 UTC m=+8.874920671 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:10.879304 master-0 kubenswrapper[7845]: E0223 13:01:10.878958 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:10.879304 master-0 kubenswrapper[7845]: E0223 13:01:10.879296 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.879284643 +0000 UTC m=+8.875015554 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:10.880875 master-0 kubenswrapper[7845]: E0223 13:01:10.879537 7845 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:01:10.880875 master-0 kubenswrapper[7845]: E0223 13:01:10.879628 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.879604272 +0000 UTC m=+8.875335143 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found Feb 23 13:01:10.979164 master-0 kubenswrapper[7845]: I0223 13:01:10.979065 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:01:10.979164 master-0 kubenswrapper[7845]: I0223 13:01:10.979159 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:01:10.979652 master-0 kubenswrapper[7845]: I0223 13:01:10.979198 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:01:10.980175 master-0 kubenswrapper[7845]: E0223 13:01:10.980078 7845 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:10.980175 master-0 kubenswrapper[7845]: E0223 13:01:10.980112 7845 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:10.980372 
master-0 kubenswrapper[7845]: E0223 13:01:10.980200 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.98016481 +0000 UTC m=+8.975895691 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found Feb 23 13:01:10.980372 master-0 kubenswrapper[7845]: E0223 13:01:10.980273 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.980224911 +0000 UTC m=+8.975955792 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found Feb 23 13:01:10.981056 master-0 kubenswrapper[7845]: E0223 13:01:10.980849 7845 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 23 13:01:10.981056 master-0 kubenswrapper[7845]: E0223 13:01:10.980914 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:14.980890781 +0000 UTC m=+8.976621842 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : secret "metrics-daemon-secret" not found Feb 23 13:01:11.291283 master-0 kubenswrapper[7845]: I0223 13:01:11.291087 7845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:11.291283 master-0 kubenswrapper[7845]: I0223 13:01:11.291133 7845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:12.507655 master-0 kubenswrapper[7845]: I0223 13:01:12.507594 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:12.508133 master-0 kubenswrapper[7845]: I0223 13:01:12.507845 7845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:12.508133 master-0 kubenswrapper[7845]: I0223 13:01:12.507862 7845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:12.537393 master-0 kubenswrapper[7845]: I0223 13:01:12.537359 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:13.022896 master-0 kubenswrapper[7845]: I0223 13:01:13.022813 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:13.299956 master-0 kubenswrapper[7845]: I0223 13:01:13.298846 7845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:13.687095 master-0 kubenswrapper[7845]: I0223 13:01:13.686761 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:13.693573 master-0 kubenswrapper[7845]: I0223 13:01:13.693330 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:14.628304 master-0 kubenswrapper[7845]: I0223 13:01:14.628121 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:14.635532 master-0 kubenswrapper[7845]: I0223 13:01:14.635149 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:14.929410 master-0 kubenswrapper[7845]: I0223 13:01:14.929231 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:01:14.929410 master-0 kubenswrapper[7845]: I0223 13:01:14.929376 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: I0223 13:01:14.929526 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: E0223 13:01:14.929586 7845 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:14.929901 
master-0 kubenswrapper[7845]: I0223 13:01:14.929631 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: E0223 13:01:14.929682 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.929650619 +0000 UTC m=+16.925381690 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: I0223 13:01:14.929727 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: E0223 13:01:14.929781 7845 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: I0223 13:01:14.929786 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: E0223 13:01:14.929865 7845 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: I0223 13:01:14.929871 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:14.929901 master-0 kubenswrapper[7845]: E0223 13:01:14.929910 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.929883946 +0000 UTC m=+16.925615057 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found Feb 23 13:01:14.930185 master-0 kubenswrapper[7845]: I0223 13:01:14.929994 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:01:14.930185 master-0 kubenswrapper[7845]: E0223 13:01:14.930010 7845 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:14.930185 master-0 kubenswrapper[7845]: I0223 13:01:14.930048 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:01:14.930185 master-0 kubenswrapper[7845]: E0223 13:01:14.930073 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.930052701 +0000 UTC m=+16.925783802 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found Feb 23 13:01:14.930185 master-0 kubenswrapper[7845]: E0223 13:01:14.930136 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:14.930185 master-0 kubenswrapper[7845]: E0223 13:01:14.930165 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.930155644 +0000 UTC m=+16.925886525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found Feb 23 13:01:14.930437 master-0 kubenswrapper[7845]: E0223 13:01:14.930196 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 23 13:01:14.930437 master-0 kubenswrapper[7845]: E0223 13:01:14.930221 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.930212176 +0000 UTC m=+16.925943057 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found Feb 23 13:01:14.930437 master-0 kubenswrapper[7845]: E0223 13:01:14.930288 7845 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:14.930437 master-0 kubenswrapper[7845]: E0223 13:01:14.930312 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.930304699 +0000 UTC m=+16.926035580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found Feb 23 13:01:14.930437 master-0 kubenswrapper[7845]: E0223 13:01:14.930382 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 23 13:01:14.930437 master-0 kubenswrapper[7845]: E0223 13:01:14.930411 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.930402132 +0000 UTC m=+16.926133263 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found Feb 23 13:01:14.930610 master-0 kubenswrapper[7845]: E0223 13:01:14.930464 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 23 13:01:14.930610 master-0 kubenswrapper[7845]: E0223 13:01:14.930490 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.930481684 +0000 UTC m=+16.926212825 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found Feb 23 13:01:14.930610 master-0 kubenswrapper[7845]: E0223 13:01:14.930506 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.930498645 +0000 UTC m=+16.926229756 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found Feb 23 13:01:15.031520 master-0 kubenswrapper[7845]: I0223 13:01:15.031184 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:01:15.031520 master-0 kubenswrapper[7845]: I0223 13:01:15.031357 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:01:15.031520 master-0 kubenswrapper[7845]: E0223 13:01:15.031391 7845 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 23 13:01:15.031520 master-0 kubenswrapper[7845]: I0223 13:01:15.031401 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:01:15.031520 master-0 kubenswrapper[7845]: E0223 13:01:15.031491 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls 
podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:23.031466734 +0000 UTC m=+17.027197605 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found Feb 23 13:01:15.031940 master-0 kubenswrapper[7845]: E0223 13:01:15.031539 7845 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 23 13:01:15.031940 master-0 kubenswrapper[7845]: E0223 13:01:15.031539 7845 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 23 13:01:15.031940 master-0 kubenswrapper[7845]: E0223 13:01:15.031598 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:23.031580208 +0000 UTC m=+17.027311089 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found Feb 23 13:01:15.031940 master-0 kubenswrapper[7845]: E0223 13:01:15.031737 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:23.031710242 +0000 UTC m=+17.027441153 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : secret "metrics-daemon-secret" not found Feb 23 13:01:15.484471 master-0 kubenswrapper[7845]: I0223 13:01:15.484372 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:15.488988 master-0 kubenswrapper[7845]: I0223 13:01:15.488917 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:16.314138 master-0 kubenswrapper[7845]: I0223 13:01:16.314056 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:01:18.317464 master-0 kubenswrapper[7845]: I0223 13:01:18.316991 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" event={"ID":"0a80d5ac-27ce-4ba9-809e-28c86b80163b","Type":"ContainerStarted","Data":"1c78631b268af69806ac6e44c535cf690809e56173b2809b3ab9b30ce469dd12"} Feb 23 13:01:18.319050 master-0 kubenswrapper[7845]: I0223 13:01:18.319024 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" event={"ID":"ae1799b6-85b0-4aed-8835-35cb3d8d1109","Type":"ContainerStarted","Data":"8ede5ecb3a272a47d1a15ebb39f7a70622cc8eaa31a144f09ad6e73baceca956"} Feb 23 13:01:18.320474 master-0 kubenswrapper[7845]: I0223 13:01:18.320449 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" 
event={"ID":"3ab71705-d574-4f95-b3fc-9f7cf5e8a557","Type":"ContainerStarted","Data":"3ae29be9fa54806971b4e3b9c2201c003f7b8a22a37869a91acf05e5506d41f9"} Feb 23 13:01:18.321852 master-0 kubenswrapper[7845]: I0223 13:01:18.321827 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" event={"ID":"99399ebb-c95f-4663-b3b6-f5dfabf47fcf","Type":"ContainerStarted","Data":"debed11d31f7b75fad2471852851fc7fa04c00d3d8576daf98e7b22222001920"} Feb 23 13:01:18.326713 master-0 kubenswrapper[7845]: I0223 13:01:18.326676 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" event={"ID":"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4","Type":"ContainerStarted","Data":"f95ba38760f7dc259e69f00ebd4eabf8bd09b35de53d8f84cbae1abd114eb259"} Feb 23 13:01:18.327951 master-0 kubenswrapper[7845]: I0223 13:01:18.327929 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" event={"ID":"25b5540c-da7d-4b6f-a15f-394451f4674e","Type":"ContainerStarted","Data":"c7bf15e370636a4712d661fd1bd5bae0ffc88b863a6740ad094330d58359da39"} Feb 23 13:01:18.329215 master-0 kubenswrapper[7845]: I0223 13:01:18.329189 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerStarted","Data":"f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62"} Feb 23 13:01:18.330942 master-0 kubenswrapper[7845]: I0223 13:01:18.330914 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerStarted","Data":"c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c"} Feb 23 13:01:18.331286 master-0 
kubenswrapper[7845]: I0223 13:01:18.331266 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:01:18.332375 master-0 kubenswrapper[7845]: I0223 13:01:18.332353 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" event={"ID":"b7585f9f-12e5-451b-beeb-db43ae778f25","Type":"ContainerStarted","Data":"e56396e411b12f7186290221f3fddfff3f3b0e11c3f756be37a285081dee7384"} Feb 23 13:01:18.333937 master-0 kubenswrapper[7845]: I0223 13:01:18.333911 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" event={"ID":"b1970ec8-620e-4529-bf3b-1cf9a52c27d3","Type":"ContainerStarted","Data":"723e0d3ac0bfebcf9019d23491b2a123aaa94b496865e7bf006a731caaf79830"} Feb 23 13:01:18.337065 master-0 kubenswrapper[7845]: I0223 13:01:18.337018 7845 generic.go:334] "Generic (PLEG): container finished" podID="24dab1bc-cf56-429b-93ce-911970c41b5c" containerID="07876e9794bd8ca67f2728050ff6edcd802e3171d1b608edbf504131457eacb4" exitCode=0 Feb 23 13:01:18.337106 master-0 kubenswrapper[7845]: I0223 13:01:18.337092 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" event={"ID":"24dab1bc-cf56-429b-93ce-911970c41b5c","Type":"ContainerDied","Data":"07876e9794bd8ca67f2728050ff6edcd802e3171d1b608edbf504131457eacb4"} Feb 23 13:01:18.609321 master-0 kubenswrapper[7845]: I0223 13:01:18.608870 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm"] Feb 23 13:01:18.609505 master-0 kubenswrapper[7845]: E0223 13:01:18.609398 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" containerName="prober" Feb 23 
13:01:18.609505 master-0 kubenswrapper[7845]: I0223 13:01:18.609414 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" containerName="prober" Feb 23 13:01:18.609505 master-0 kubenswrapper[7845]: E0223 13:01:18.609424 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 13:01:18.609505 master-0 kubenswrapper[7845]: I0223 13:01:18.609432 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 13:01:18.609659 master-0 kubenswrapper[7845]: I0223 13:01:18.609506 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c56df7-2c8d-40d1-b737-7fa8cc661b81" containerName="prober" Feb 23 13:01:18.609659 master-0 kubenswrapper[7845]: I0223 13:01:18.609523 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 13:01:18.609911 master-0 kubenswrapper[7845]: I0223 13:01:18.609889 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" Feb 23 13:01:18.612760 master-0 kubenswrapper[7845]: I0223 13:01:18.611574 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm"] Feb 23 13:01:18.698770 master-0 kubenswrapper[7845]: I0223 13:01:18.697884 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tgmq\" (UniqueName: \"kubernetes.io/projected/4e6bc033-cd90-4704-b03a-8e9c6c0d3904-kube-api-access-2tgmq\") pod \"csi-snapshot-controller-6847bb4785-hgkrm\" (UID: \"4e6bc033-cd90-4704-b03a-8e9c6c0d3904\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" Feb 23 13:01:18.798684 master-0 kubenswrapper[7845]: I0223 13:01:18.798633 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tgmq\" (UniqueName: \"kubernetes.io/projected/4e6bc033-cd90-4704-b03a-8e9c6c0d3904-kube-api-access-2tgmq\") pod \"csi-snapshot-controller-6847bb4785-hgkrm\" (UID: \"4e6bc033-cd90-4704-b03a-8e9c6c0d3904\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" Feb 23 13:01:18.827368 master-0 kubenswrapper[7845]: I0223 13:01:18.821391 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tgmq\" (UniqueName: \"kubernetes.io/projected/4e6bc033-cd90-4704-b03a-8e9c6c0d3904-kube-api-access-2tgmq\") pod \"csi-snapshot-controller-6847bb4785-hgkrm\" (UID: \"4e6bc033-cd90-4704-b03a-8e9c6c0d3904\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" Feb 23 13:01:18.957267 master-0 kubenswrapper[7845]: I0223 13:01:18.957123 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" Feb 23 13:01:19.157436 master-0 kubenswrapper[7845]: I0223 13:01:19.156472 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm"] Feb 23 13:01:19.320739 master-0 kubenswrapper[7845]: W0223 13:01:19.320547 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6bc033_cd90_4704_b03a_8e9c6c0d3904.slice/crio-f81b2dd369e93dc40f927baca8dae686df59bd8a564f1ae9d88f270b6628811d WatchSource:0}: Error finding container f81b2dd369e93dc40f927baca8dae686df59bd8a564f1ae9d88f270b6628811d: Status 404 returned error can't find the container with id f81b2dd369e93dc40f927baca8dae686df59bd8a564f1ae9d88f270b6628811d Feb 23 13:01:19.343633 master-0 kubenswrapper[7845]: I0223 13:01:19.343590 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"f81b2dd369e93dc40f927baca8dae686df59bd8a564f1ae9d88f270b6628811d"} Feb 23 13:01:19.608721 master-0 kubenswrapper[7845]: I0223 13:01:19.608567 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr"] Feb 23 13:01:19.609081 master-0 kubenswrapper[7845]: I0223 13:01:19.609064 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" Feb 23 13:01:19.610821 master-0 kubenswrapper[7845]: I0223 13:01:19.610800 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 13:01:19.611026 master-0 kubenswrapper[7845]: I0223 13:01:19.611010 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 13:01:19.620960 master-0 kubenswrapper[7845]: I0223 13:01:19.620914 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr"] Feb 23 13:01:19.621886 master-0 kubenswrapper[7845]: I0223 13:01:19.621851 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"] Feb 23 13:01:19.622409 master-0 kubenswrapper[7845]: I0223 13:01:19.622386 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.625704 master-0 kubenswrapper[7845]: I0223 13:01:19.625661 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:01:19.625873 master-0 kubenswrapper[7845]: I0223 13:01:19.625853 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:01:19.625973 master-0 kubenswrapper[7845]: I0223 13:01:19.625949 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:01:19.626036 master-0 kubenswrapper[7845]: I0223 13:01:19.625986 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:01:19.626086 master-0 kubenswrapper[7845]: I0223 13:01:19.626077 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:01:19.626146 master-0 kubenswrapper[7845]: I0223 13:01:19.626125 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:01:19.673967 master-0 kubenswrapper[7845]: I0223 13:01:19.673896 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"] Feb 23 13:01:19.708851 master-0 kubenswrapper[7845]: I0223 13:01:19.708813 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.708851 master-0 kubenswrapper[7845]: I0223 13:01:19.708853 7845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xqc\" (UniqueName: \"kubernetes.io/projected/8da207eb-1fa2-402d-ae8c-2368cd4e108a-kube-api-access-49xqc\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.709065 master-0 kubenswrapper[7845]: I0223 13:01:19.708905 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.709098 master-0 kubenswrapper[7845]: I0223 13:01:19.709040 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f6sq\" (UniqueName: \"kubernetes.io/projected/ae5c9120-c38d-46c0-af43-9275563b1a67-kube-api-access-8f6sq\") pod \"migrator-5c85bff57-xj4vr\" (UID: \"ae5c9120-c38d-46c0-af43-9275563b1a67\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" Feb 23 13:01:19.709128 master-0 kubenswrapper[7845]: I0223 13:01:19.709103 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.710321 master-0 kubenswrapper[7845]: I0223 13:01:19.709256 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert\") 
pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.811956 master-0 kubenswrapper[7845]: I0223 13:01:19.811897 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.811956 master-0 kubenswrapper[7845]: I0223 13:01:19.811955 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xqc\" (UniqueName: \"kubernetes.io/projected/8da207eb-1fa2-402d-ae8c-2368cd4e108a-kube-api-access-49xqc\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.812191 master-0 kubenswrapper[7845]: I0223 13:01:19.812155 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.812223 master-0 kubenswrapper[7845]: E0223 13:01:19.812142 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 23 13:01:19.812328 master-0 kubenswrapper[7845]: I0223 13:01:19.812181 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f6sq\" (UniqueName: \"kubernetes.io/projected/ae5c9120-c38d-46c0-af43-9275563b1a67-kube-api-access-8f6sq\") pod \"migrator-5c85bff57-xj4vr\" (UID: 
\"ae5c9120-c38d-46c0-af43-9275563b1a67\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" Feb 23 13:01:19.812408 master-0 kubenswrapper[7845]: E0223 13:01:19.812314 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.312283749 +0000 UTC m=+14.308014620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : configmap "config" not found Feb 23 13:01:19.812466 master-0 kubenswrapper[7845]: I0223 13:01:19.812439 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.812597 master-0 kubenswrapper[7845]: I0223 13:01:19.812572 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.812678 master-0 kubenswrapper[7845]: E0223 13:01:19.812645 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 23 13:01:19.812740 master-0 kubenswrapper[7845]: E0223 13:01:19.812721 7845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.312700982 +0000 UTC m=+14.308431853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : configmap "openshift-global-ca" not found Feb 23 13:01:19.813462 master-0 kubenswrapper[7845]: E0223 13:01:19.812777 7845 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:19.813462 master-0 kubenswrapper[7845]: E0223 13:01:19.812836 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:19.813462 master-0 kubenswrapper[7845]: E0223 13:01:19.812878 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.312842886 +0000 UTC m=+14.308573967 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : secret "serving-cert" not found Feb 23 13:01:19.813462 master-0 kubenswrapper[7845]: E0223 13:01:19.812900 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:20.312890437 +0000 UTC m=+14.308621548 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : configmap "client-ca" not found Feb 23 13:01:19.834163 master-0 kubenswrapper[7845]: I0223 13:01:19.834075 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f6sq\" (UniqueName: \"kubernetes.io/projected/ae5c9120-c38d-46c0-af43-9275563b1a67-kube-api-access-8f6sq\") pod \"migrator-5c85bff57-xj4vr\" (UID: \"ae5c9120-c38d-46c0-af43-9275563b1a67\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" Feb 23 13:01:19.847323 master-0 kubenswrapper[7845]: I0223 13:01:19.847283 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xqc\" (UniqueName: \"kubernetes.io/projected/8da207eb-1fa2-402d-ae8c-2368cd4e108a-kube-api-access-49xqc\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:19.975121 master-0 kubenswrapper[7845]: I0223 13:01:19.975036 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" Feb 23 13:01:20.155750 master-0 kubenswrapper[7845]: I0223 13:01:20.155702 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr"] Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: I0223 13:01:20.323454 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: E0223 13:01:20.323657 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: I0223 13:01:20.323702 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: I0223 13:01:20.323741 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: E0223 13:01:20.323789 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config 
podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.323752946 +0000 UTC m=+15.319483857 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : configmap "config" not found Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: E0223 13:01:20.323840 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: E0223 13:01:20.323851 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: E0223 13:01:20.323905 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.32388623 +0000 UTC m=+15.319617101 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : configmap "openshift-global-ca" not found Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: E0223 13:01:20.323920 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.323914311 +0000 UTC m=+15.319645172 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : configmap "client-ca" not found Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: I0223 13:01:20.323946 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:20.324348 master-0 kubenswrapper[7845]: E0223 13:01:20.324290 7845 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:20.325534 master-0 kubenswrapper[7845]: E0223 13:01:20.324414 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.324387685 +0000 UTC m=+15.320118756 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : secret "serving-cert" not found Feb 23 13:01:20.485557 master-0 kubenswrapper[7845]: I0223 13:01:20.484997 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:01:20.788578 master-0 kubenswrapper[7845]: I0223 13:01:20.788077 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"] Feb 23 13:01:20.789068 master-0 kubenswrapper[7845]: E0223 13:01:20.788998 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" podUID="8da207eb-1fa2-402d-ae8c-2368cd4e108a" Feb 23 13:01:20.792129 master-0 kubenswrapper[7845]: I0223 13:01:20.792099 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"] Feb 23 13:01:20.792940 master-0 kubenswrapper[7845]: I0223 13:01:20.792757 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:20.796329 master-0 kubenswrapper[7845]: I0223 13:01:20.796086 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 13:01:20.798120 master-0 kubenswrapper[7845]: I0223 13:01:20.796475 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:01:20.798120 master-0 kubenswrapper[7845]: I0223 13:01:20.796783 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:01:20.798120 master-0 kubenswrapper[7845]: I0223 13:01:20.797037 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:01:20.798120 master-0 kubenswrapper[7845]: I0223 13:01:20.797235 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:01:20.806869 master-0 kubenswrapper[7845]: I0223 13:01:20.806819 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"] Feb 23 13:01:20.932379 master-0 kubenswrapper[7845]: I0223 13:01:20.932317 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2lm6\" (UniqueName: \"kubernetes.io/projected/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-kube-api-access-s2lm6\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:20.932632 master-0 kubenswrapper[7845]: I0223 13:01:20.932429 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-config\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:20.932632 master-0 kubenswrapper[7845]: I0223 13:01:20.932518 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:20.932632 master-0 kubenswrapper[7845]: I0223 13:01:20.932537 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:21.034204 master-0 kubenswrapper[7845]: I0223 13:01:21.034006 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-config\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:21.034204 master-0 kubenswrapper[7845]: I0223 13:01:21.034130 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " 
pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:21.034204 master-0 kubenswrapper[7845]: I0223 13:01:21.034193 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:21.034918 master-0 kubenswrapper[7845]: E0223 13:01:21.034413 7845 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:21.034918 master-0 kubenswrapper[7845]: E0223 13:01:21.034502 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.534457672 +0000 UTC m=+15.530188543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : secret "serving-cert" not found Feb 23 13:01:21.035321 master-0 kubenswrapper[7845]: E0223 13:01:21.034961 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:21.035321 master-0 kubenswrapper[7845]: I0223 13:01:21.035107 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2lm6\" (UniqueName: \"kubernetes.io/projected/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-kube-api-access-s2lm6\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:21.035634 master-0 kubenswrapper[7845]: E0223 13:01:21.035556 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:21.535490813 +0000 UTC m=+15.531221724 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : configmap "client-ca" not found Feb 23 13:01:21.036288 master-0 kubenswrapper[7845]: I0223 13:01:21.036190 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-config\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:21.057635 master-0 kubenswrapper[7845]: I0223 13:01:21.057581 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2lm6\" (UniqueName: \"kubernetes.io/projected/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-kube-api-access-s2lm6\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.338033 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-nds57"] Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.338330 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.338433 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.338472 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.338492 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.338660 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: E0223 13:01:21.339854 7845 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: E0223 13:01:21.339923 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:23.339904517 +0000 UTC m=+17.335635398 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : secret "serving-cert" not found Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.339941 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl" Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: E0223 13:01:21.339996 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: E0223 13:01:21.340031 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca podName:8da207eb-1fa2-402d-ae8c-2368cd4e108a nodeName:}" failed. No retries permitted until 2026-02-23 13:01:23.340021311 +0000 UTC m=+17.335752182 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca") pod "controller-manager-6c9b8f4d95-wfqnl" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a") : configmap "client-ca" not found
Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.340088 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config\") pod \"controller-manager-6c9b8f4d95-wfqnl\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"
Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.341546 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.341687 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 13:01:21.343373 master-0 kubenswrapper[7845]: I0223 13:01:21.342824 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 23 13:01:21.345134 master-0 kubenswrapper[7845]: I0223 13:01:21.344374 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 23 13:01:21.376483 master-0 kubenswrapper[7845]: I0223 13:01:21.376175 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"
Feb 23 13:01:21.376483 master-0 kubenswrapper[7845]: I0223 13:01:21.376229 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-qd2ns" event={"ID":"048f4455-d99a-407b-8674-60efc7aa6ecb","Type":"ContainerStarted","Data":"165a3c60ba04261b8e3a80dfff387d3e06e6e28587856001050eedeb241a47e4"}
Feb 23 13:01:21.380328 master-0 kubenswrapper[7845]: I0223 13:01:21.378264 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-nds57"]
Feb 23 13:01:21.386774 master-0 kubenswrapper[7845]: I0223 13:01:21.386730 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"
Feb 23 13:01:21.439844 master-0 kubenswrapper[7845]: I0223 13:01:21.439787 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config\") pod \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") "
Feb 23 13:01:21.440020 master-0 kubenswrapper[7845]: I0223 13:01:21.439857 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49xqc\" (UniqueName: \"kubernetes.io/projected/8da207eb-1fa2-402d-ae8c-2368cd4e108a-kube-api-access-49xqc\") pod \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") "
Feb 23 13:01:21.440020 master-0 kubenswrapper[7845]: I0223 13:01:21.439904 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles\") pod \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\" (UID: \"8da207eb-1fa2-402d-ae8c-2368cd4e108a\") "
Feb 23 13:01:21.440108 master-0 kubenswrapper[7845]: I0223 13:01:21.440078 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-key\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.440284 master-0 kubenswrapper[7845]: I0223 13:01:21.440238 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-cabundle\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.440334 master-0 kubenswrapper[7845]: I0223 13:01:21.440312 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r8s7\" (UniqueName: \"kubernetes.io/projected/71a07622-3038-4b8c-b6bb-5f28a4115012-kube-api-access-6r8s7\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.440366 master-0 kubenswrapper[7845]: I0223 13:01:21.440332 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config" (OuterVolumeSpecName: "config") pod "8da207eb-1fa2-402d-ae8c-2368cd4e108a" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:21.441077 master-0 kubenswrapper[7845]: I0223 13:01:21.441045 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8da207eb-1fa2-402d-ae8c-2368cd4e108a" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:21.443041 master-0 kubenswrapper[7845]: I0223 13:01:21.443016 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da207eb-1fa2-402d-ae8c-2368cd4e108a-kube-api-access-49xqc" (OuterVolumeSpecName: "kube-api-access-49xqc") pod "8da207eb-1fa2-402d-ae8c-2368cd4e108a" (UID: "8da207eb-1fa2-402d-ae8c-2368cd4e108a"). InnerVolumeSpecName "kube-api-access-49xqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:01:21.513739 master-0 kubenswrapper[7845]: W0223 13:01:21.513674 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae5c9120_c38d_46c0_af43_9275563b1a67.slice/crio-0b622d2ce727cdb988e6f2262823c6404b1690f9ace5d0d0a58996f9054295b9 WatchSource:0}: Error finding container 0b622d2ce727cdb988e6f2262823c6404b1690f9ace5d0d0a58996f9054295b9: Status 404 returned error can't find the container with id 0b622d2ce727cdb988e6f2262823c6404b1690f9ace5d0d0a58996f9054295b9
Feb 23 13:01:21.541256 master-0 kubenswrapper[7845]: I0223 13:01:21.541200 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-cabundle\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.541347 master-0 kubenswrapper[7845]: I0223 13:01:21.541319 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r8s7\" (UniqueName: \"kubernetes.io/projected/71a07622-3038-4b8c-b6bb-5f28a4115012-kube-api-access-6r8s7\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.541410 master-0 kubenswrapper[7845]: I0223 13:01:21.541357 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"
Feb 23 13:01:21.541447 master-0 kubenswrapper[7845]: I0223 13:01:21.541412 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"
Feb 23 13:01:21.541476 master-0 kubenswrapper[7845]: I0223 13:01:21.541451 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-key\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.541588 master-0 kubenswrapper[7845]: E0223 13:01:21.541550 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:21.541627 master-0 kubenswrapper[7845]: E0223 13:01:21.541612 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.541592079 +0000 UTC m=+16.537322960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : configmap "client-ca" not found
Feb 23 13:01:21.542038 master-0 kubenswrapper[7845]: E0223 13:01:21.542009 7845 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 23 13:01:21.542092 master-0 kubenswrapper[7845]: E0223 13:01:21.542081 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:22.542060043 +0000 UTC m=+16.537791144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : secret "serving-cert" not found
Feb 23 13:01:21.542204 master-0 kubenswrapper[7845]: I0223 13:01:21.542167 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-cabundle\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.542257 master-0 kubenswrapper[7845]: I0223 13:01:21.542185 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.542293 master-0 kubenswrapper[7845]: I0223 13:01:21.542276 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49xqc\" (UniqueName: \"kubernetes.io/projected/8da207eb-1fa2-402d-ae8c-2368cd4e108a-kube-api-access-49xqc\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.542618 master-0 kubenswrapper[7845]: I0223 13:01:21.542296 7845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:21.554281 master-0 kubenswrapper[7845]: I0223 13:01:21.546081 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-key\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.559731 master-0 kubenswrapper[7845]: I0223 13:01:21.559683 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r8s7\" (UniqueName: \"kubernetes.io/projected/71a07622-3038-4b8c-b6bb-5f28a4115012-kube-api-access-6r8s7\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.657217 master-0 kubenswrapper[7845]: I0223 13:01:21.657188 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:01:21.903348 master-0 kubenswrapper[7845]: I0223 13:01:21.902891 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-nds57"]
Feb 23 13:01:22.389465 master-0 kubenswrapper[7845]: I0223 13:01:22.389418 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" event={"ID":"24dab1bc-cf56-429b-93ce-911970c41b5c","Type":"ContainerStarted","Data":"cde99f61030d5fde6382d6afa69998ae8c2f31acfb6e6f4017c5ade4d9e4754a"}
Feb 23 13:01:22.390415 master-0 kubenswrapper[7845]: I0223 13:01:22.390343 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" event={"ID":"ae5c9120-c38d-46c0-af43-9275563b1a67","Type":"ContainerStarted","Data":"0b622d2ce727cdb988e6f2262823c6404b1690f9ace5d0d0a58996f9054295b9"}
Feb 23 13:01:22.392295 master-0 kubenswrapper[7845]: I0223 13:01:22.392265 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6"}
Feb 23 13:01:22.406226 master-0 kubenswrapper[7845]: I0223 13:01:22.406171 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" event={"ID":"71a07622-3038-4b8c-b6bb-5f28a4115012","Type":"ContainerStarted","Data":"049f73307f806904035423cc3efd5b594e3e2163521bdc03014ba97dd009ed14"}
Feb 23 13:01:22.406226 master-0 kubenswrapper[7845]: I0223 13:01:22.406225 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" event={"ID":"71a07622-3038-4b8c-b6bb-5f28a4115012","Type":"ContainerStarted","Data":"e402396c861028ad44b45bca58dd0a4df2309cc7110b7c0eb008ea09d7318bee"}
Feb 23 13:01:22.406444 master-0 kubenswrapper[7845]: I0223 13:01:22.406346 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"
Feb 23 13:01:22.435312 master-0 kubenswrapper[7845]: I0223 13:01:22.434761 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podStartSLOduration=2.2194569570000002 podStartE2EDuration="4.434742839s" podCreationTimestamp="2026-02-23 13:01:18 +0000 UTC" firstStartedPulling="2026-02-23 13:01:19.325933806 +0000 UTC m=+13.321664717" lastFinishedPulling="2026-02-23 13:01:21.541219728 +0000 UTC m=+15.536950599" observedRunningTime="2026-02-23 13:01:22.43341976 +0000 UTC m=+16.429150641" watchObservedRunningTime="2026-02-23 13:01:22.434742839 +0000 UTC m=+16.430473710"
Feb 23 13:01:22.473892 master-0 kubenswrapper[7845]: I0223 13:01:22.473801 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" podStartSLOduration=1.473778031 podStartE2EDuration="1.473778031s" podCreationTimestamp="2026-02-23 13:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:22.456899014 +0000 UTC m=+16.452629935" watchObservedRunningTime="2026-02-23 13:01:22.473778031 +0000 UTC m=+16.469508912"
Feb 23 13:01:22.484620 master-0 kubenswrapper[7845]: I0223 13:01:22.484496 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"]
Feb 23 13:01:22.491932 master-0 kubenswrapper[7845]: I0223 13:01:22.491846 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-wfqnl"]
Feb 23 13:01:22.560754 master-0 kubenswrapper[7845]: I0223 13:01:22.556607 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"
Feb 23 13:01:22.560754 master-0 kubenswrapper[7845]: I0223 13:01:22.556649 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"
Feb 23 13:01:22.560754 master-0 kubenswrapper[7845]: I0223 13:01:22.556746 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8da207eb-1fa2-402d-ae8c-2368cd4e108a-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:22.560754 master-0 kubenswrapper[7845]: I0223 13:01:22.556757 7845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8da207eb-1fa2-402d-ae8c-2368cd4e108a-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:22.560754 master-0 kubenswrapper[7845]: E0223 13:01:22.557580 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:22.560754 master-0 kubenswrapper[7845]: E0223 13:01:22.557682 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:24.557659028 +0000 UTC m=+18.553389899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : configmap "client-ca" not found
Feb 23 13:01:22.560754 master-0 kubenswrapper[7845]: E0223 13:01:22.557677 7845 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 23 13:01:22.560754 master-0 kubenswrapper[7845]: E0223 13:01:22.559072 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:24.55905927 +0000 UTC m=+18.554790161 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : secret "serving-cert" not found
Feb 23 13:01:22.961015 master-0 kubenswrapper[7845]: I0223 13:01:22.960608 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:01:22.961212 master-0 kubenswrapper[7845]: I0223 13:01:22.961039 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:22.961212 master-0 kubenswrapper[7845]: E0223 13:01:22.960829 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 23 13:01:22.961212 master-0 kubenswrapper[7845]: I0223 13:01:22.961097 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:22.961212 master-0 kubenswrapper[7845]: E0223 13:01:22.961165 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert podName:da5d5997-e45f-4858-a9a9-e880bc222caf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.961144705 +0000 UTC m=+32.956875576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-8tzms" (UID: "da5d5997-e45f-4858-a9a9-e880bc222caf") : secret "package-server-manager-serving-cert" not found
Feb 23 13:01:22.961369 master-0 kubenswrapper[7845]: E0223 13:01:22.961261 7845 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 23 13:01:22.961369 master-0 kubenswrapper[7845]: I0223 13:01:22.961321 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:22.961369 master-0 kubenswrapper[7845]: E0223 13:01:22.961345 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert podName:cbcca259-0dbf-48ca-bf90-eec638dcdd10 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.96132303 +0000 UTC m=+32.957054101 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert") pod "olm-operator-5499d7f7bb-g9x74" (UID: "cbcca259-0dbf-48ca-bf90-eec638dcdd10") : secret "olm-operator-serving-cert" not found
Feb 23 13:01:22.961369 master-0 kubenswrapper[7845]: E0223 13:01:22.961265 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 23 13:01:22.961482 master-0 kubenswrapper[7845]: I0223 13:01:22.961382 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:22.961482 master-0 kubenswrapper[7845]: E0223 13:01:22.961398 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.961382872 +0000 UTC m=+32.957113753 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "node-tuning-operator-tls" not found
Feb 23 13:01:22.961482 master-0 kubenswrapper[7845]: I0223 13:01:22.961413 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 13:01:22.961482 master-0 kubenswrapper[7845]: I0223 13:01:22.961455 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:22.961482 master-0 kubenswrapper[7845]: I0223 13:01:22.961481 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: E0223 13:01:22.961457 7845 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: I0223 13:01:22.961522 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: E0223 13:01:22.961531 7845 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: E0223 13:01:22.961575 7845 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: E0223 13:01:22.961588 7845 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: E0223 13:01:22.961534 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls podName:ee436961-c305-4c84-b4f9-175e1d8004fb nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.961522256 +0000 UTC m=+32.957253137 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-b2xcd" (UID: "ee436961-c305-4c84-b4f9-175e1d8004fb") : secret "cluster-monitoring-operator-tls" not found
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: E0223 13:01:22.961495 7845 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: E0223 13:01:22.961620 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls podName:dcd03d6e-4c8c-400a-8001-343aaeeca93b nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.961609319 +0000 UTC m=+32.957340210 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls") pod "ingress-operator-6569778c84-gswst" (UID: "dcd03d6e-4c8c-400a-8001-343aaeeca93b") : secret "metrics-tls" not found
Feb 23 13:01:22.961623 master-0 kubenswrapper[7845]: E0223 13:01:22.961634 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert podName:a3dfb271-a659-45e0-b51d-5e99ec43b555 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.961626169 +0000 UTC m=+32.957357040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-6llwl" (UID: "a3dfb271-a659-45e0-b51d-5e99ec43b555") : secret "performance-addon-operator-webhook-cert" not found
Feb 23 13:01:22.961895 master-0 kubenswrapper[7845]: E0223 13:01:22.961651 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics podName:1d953c37-1b74-4ce5-89cb-b3f53454fc57 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.9616431 +0000 UTC m=+32.957373971 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-28zcz" (UID: "1d953c37-1b74-4ce5-89cb-b3f53454fc57") : secret "marketplace-operator-metrics" not found
Feb 23 13:01:22.961895 master-0 kubenswrapper[7845]: E0223 13:01:22.961662 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert podName:b053c311-07fd-45bb-ab10-6e7b76c9aa48 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.96165707 +0000 UTC m=+32.957387951 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert") pod "cluster-version-operator-5cfd9759cf-lfpt7" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48") : secret "cluster-version-operator-serving-cert" not found
Feb 23 13:01:22.961895 master-0 kubenswrapper[7845]: E0223 13:01:22.961677 7845 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 23 13:01:22.961895 master-0 kubenswrapper[7845]: E0223 13:01:22.961739 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls podName:8a406f63-eeeb-4da3-a1d0-86b5ab5d802c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.961722372 +0000 UTC m=+32.957453243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-7rb6v" (UID: "8a406f63-eeeb-4da3-a1d0-86b5ab5d802c") : secret "image-registry-operator-tls" not found
Feb 23 13:01:23.063008 master-0 kubenswrapper[7845]: I0223 13:01:23.062940 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:23.063008 master-0 kubenswrapper[7845]: I0223 13:01:23.062992 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:01:23.063396 master-0 kubenswrapper[7845]: E0223 13:01:23.063162 7845 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 23 13:01:23.063396 master-0 kubenswrapper[7845]: E0223 13:01:23.063274 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls podName:08577c3c-73d8-47f4-ba30-aec11af51d40 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:39.063232708 +0000 UTC m=+33.058963589 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls") pod "dns-operator-8c7d49845-7466r" (UID: "08577c3c-73d8-47f4-ba30-aec11af51d40") : secret "metrics-tls" not found
Feb 23 13:01:23.063729 master-0 kubenswrapper[7845]: I0223 13:01:23.063695 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:01:23.063969 master-0 kubenswrapper[7845]: E0223 13:01:23.063937 7845 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 23 13:01:23.064051 master-0 kubenswrapper[7845]: E0223 13:01:23.063979 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs podName:e7fbab55-8405-44f4-ae2a-412c115ce411 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:39.06396866 +0000 UTC m=+33.059699541 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs") pod "network-metrics-daemon-kq2rk" (UID: "e7fbab55-8405-44f4-ae2a-412c115ce411") : secret "metrics-daemon-secret" not found
Feb 23 13:01:23.064051 master-0 kubenswrapper[7845]: E0223 13:01:23.064039 7845 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 23 13:01:23.064210 master-0 kubenswrapper[7845]: E0223 13:01:23.064062 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs podName:44b07d33-6e84-434e-9a14-431846620968 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:39.064054843 +0000 UTC m=+33.059785724 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-8hstp" (UID: "44b07d33-6e84-434e-9a14-431846620968") : secret "multus-admission-controller-secret" not found
Feb 23 13:01:23.414263 master-0 kubenswrapper[7845]: I0223 13:01:23.411338 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" event={"ID":"ae5c9120-c38d-46c0-af43-9275563b1a67","Type":"ContainerStarted","Data":"3609616f554d61d1b46bfa07ef8c04186f81487177b1570bdf745483969649ba"}
Feb 23 13:01:24.175437 master-0 kubenswrapper[7845]: I0223 13:01:24.175374 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-557bf46fb-8ljrl"]
Feb 23 13:01:24.175952 master-0 kubenswrapper[7845]: I0223 13:01:24.175915 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl"
Feb 23 13:01:24.179659 master-0 kubenswrapper[7845]: I0223 13:01:24.179605 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:01:24.179844 master-0 kubenswrapper[7845]: I0223 13:01:24.179681 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:01:24.179844 master-0 kubenswrapper[7845]: I0223 13:01:24.179767 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:01:24.181219 master-0 kubenswrapper[7845]: I0223 13:01:24.181177 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:01:24.181918 master-0 kubenswrapper[7845]: I0223 13:01:24.181885 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:01:24.187060 master-0 kubenswrapper[7845]: I0223 13:01:24.186985 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:01:24.196390 master-0 kubenswrapper[7845]: I0223 13:01:24.196276 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-557bf46fb-8ljrl"]
Feb 23 13:01:24.211586 master-0 kubenswrapper[7845]: I0223 13:01:24.211519 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da207eb-1fa2-402d-ae8c-2368cd4e108a" path="/var/lib/kubelet/pods/8da207eb-1fa2-402d-ae8c-2368cd4e108a/volumes"
Feb 23 13:01:24.279702 master-0 kubenswrapper[7845]: I0223 13:01:24.279628 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert\") pod
\"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.279702 master-0 kubenswrapper[7845]: I0223 13:01:24.279683 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.280042 master-0 kubenswrapper[7845]: I0223 13:01:24.279831 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nrb\" (UniqueName: \"kubernetes.io/projected/d11074ac-1ee4-447e-883d-b78a5a03176f-kube-api-access-z7nrb\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.280042 master-0 kubenswrapper[7845]: I0223 13:01:24.279984 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-config\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.280169 master-0 kubenswrapper[7845]: I0223 13:01:24.280069 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-proxy-ca-bundles\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.381118 master-0 
kubenswrapper[7845]: I0223 13:01:24.381020 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-config\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.381470 master-0 kubenswrapper[7845]: I0223 13:01:24.381408 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-proxy-ca-bundles\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.381612 master-0 kubenswrapper[7845]: I0223 13:01:24.381557 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.381708 master-0 kubenswrapper[7845]: I0223 13:01:24.381633 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.381904 master-0 kubenswrapper[7845]: E0223 13:01:24.381820 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:24.381904 master-0 kubenswrapper[7845]: I0223 13:01:24.381875 7845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-z7nrb\" (UniqueName: \"kubernetes.io/projected/d11074ac-1ee4-447e-883d-b78a5a03176f-kube-api-access-z7nrb\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.382097 master-0 kubenswrapper[7845]: E0223 13:01:24.381977 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca podName:d11074ac-1ee4-447e-883d-b78a5a03176f nodeName:}" failed. No retries permitted until 2026-02-23 13:01:24.881934627 +0000 UTC m=+18.877665678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca") pod "controller-manager-557bf46fb-8ljrl" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f") : configmap "client-ca" not found Feb 23 13:01:24.382097 master-0 kubenswrapper[7845]: E0223 13:01:24.382014 7845 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:24.382346 master-0 kubenswrapper[7845]: E0223 13:01:24.382202 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert podName:d11074ac-1ee4-447e-883d-b78a5a03176f nodeName:}" failed. No retries permitted until 2026-02-23 13:01:24.882167294 +0000 UTC m=+18.877898175 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert") pod "controller-manager-557bf46fb-8ljrl" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f") : secret "serving-cert" not found Feb 23 13:01:24.383469 master-0 kubenswrapper[7845]: I0223 13:01:24.383402 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-config\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.383632 master-0 kubenswrapper[7845]: I0223 13:01:24.383592 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-proxy-ca-bundles\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.418483 master-0 kubenswrapper[7845]: I0223 13:01:24.418404 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nrb\" (UniqueName: \"kubernetes.io/projected/d11074ac-1ee4-447e-883d-b78a5a03176f-kube-api-access-z7nrb\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.419409 master-0 kubenswrapper[7845]: I0223 13:01:24.418520 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" event={"ID":"ae5c9120-c38d-46c0-af43-9275563b1a67","Type":"ContainerStarted","Data":"6acfe3e5b118d229d4853a718d69c87a387cc69e29740e6ae74dca8cc5b1b3b9"} Feb 23 13:01:24.435952 master-0 kubenswrapper[7845]: I0223 13:01:24.435787 7845 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" podStartSLOduration=3.729173643 podStartE2EDuration="5.435756222s" podCreationTimestamp="2026-02-23 13:01:19 +0000 UTC" firstStartedPulling="2026-02-23 13:01:21.533434394 +0000 UTC m=+15.529165265" lastFinishedPulling="2026-02-23 13:01:23.240016973 +0000 UTC m=+17.235747844" observedRunningTime="2026-02-23 13:01:24.434107063 +0000 UTC m=+18.429837964" watchObservedRunningTime="2026-02-23 13:01:24.435756222 +0000 UTC m=+18.431487123" Feb 23 13:01:24.585147 master-0 kubenswrapper[7845]: I0223 13:01:24.585052 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:24.585147 master-0 kubenswrapper[7845]: I0223 13:01:24.585112 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:24.585522 master-0 kubenswrapper[7845]: E0223 13:01:24.585326 7845 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:24.585522 master-0 kubenswrapper[7845]: E0223 13:01:24.585447 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. 
No retries permitted until 2026-02-23 13:01:28.585414823 +0000 UTC m=+22.581145724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : secret "serving-cert" not found Feb 23 13:01:24.585522 master-0 kubenswrapper[7845]: E0223 13:01:24.585504 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:24.585705 master-0 kubenswrapper[7845]: E0223 13:01:24.585578 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:28.585559307 +0000 UTC m=+22.581290188 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : configmap "client-ca" not found Feb 23 13:01:24.892035 master-0 kubenswrapper[7845]: I0223 13:01:24.891926 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.892035 master-0 kubenswrapper[7845]: I0223 13:01:24.892033 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: 
\"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:24.892595 master-0 kubenswrapper[7845]: E0223 13:01:24.892535 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:24.892724 master-0 kubenswrapper[7845]: E0223 13:01:24.892681 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca podName:d11074ac-1ee4-447e-883d-b78a5a03176f nodeName:}" failed. No retries permitted until 2026-02-23 13:01:25.892643712 +0000 UTC m=+19.888374623 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca") pod "controller-manager-557bf46fb-8ljrl" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f") : configmap "client-ca" not found Feb 23 13:01:24.893064 master-0 kubenswrapper[7845]: E0223 13:01:24.893006 7845 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:24.893147 master-0 kubenswrapper[7845]: E0223 13:01:24.893124 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert podName:d11074ac-1ee4-447e-883d-b78a5a03176f nodeName:}" failed. No retries permitted until 2026-02-23 13:01:25.893096715 +0000 UTC m=+19.888827616 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert") pod "controller-manager-557bf46fb-8ljrl" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f") : secret "serving-cert" not found Feb 23 13:01:25.911748 master-0 kubenswrapper[7845]: I0223 13:01:25.911065 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:25.911748 master-0 kubenswrapper[7845]: E0223 13:01:25.911353 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:25.912665 master-0 kubenswrapper[7845]: I0223 13:01:25.911657 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:25.912665 master-0 kubenswrapper[7845]: E0223 13:01:25.911807 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca podName:d11074ac-1ee4-447e-883d-b78a5a03176f nodeName:}" failed. No retries permitted until 2026-02-23 13:01:27.911767222 +0000 UTC m=+21.907498143 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca") pod "controller-manager-557bf46fb-8ljrl" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f") : configmap "client-ca" not found Feb 23 13:01:25.917992 master-0 kubenswrapper[7845]: I0223 13:01:25.917917 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:27.948900 master-0 kubenswrapper[7845]: I0223 13:01:27.948823 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:27.950205 master-0 kubenswrapper[7845]: E0223 13:01:27.949081 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:27.950205 master-0 kubenswrapper[7845]: E0223 13:01:27.949236 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca podName:d11074ac-1ee4-447e-883d-b78a5a03176f nodeName:}" failed. No retries permitted until 2026-02-23 13:01:31.949206768 +0000 UTC m=+25.944937679 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca") pod "controller-manager-557bf46fb-8ljrl" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f") : configmap "client-ca" not found Feb 23 13:01:28.660416 master-0 kubenswrapper[7845]: I0223 13:01:28.659945 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:28.660768 master-0 kubenswrapper[7845]: I0223 13:01:28.660467 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:28.660768 master-0 kubenswrapper[7845]: E0223 13:01:28.660143 7845 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 23 13:01:28.660768 master-0 kubenswrapper[7845]: E0223 13:01:28.660642 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:28.660768 master-0 kubenswrapper[7845]: E0223 13:01:28.660697 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:36.660659866 +0000 UTC m=+30.656390777 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : secret "serving-cert" not found Feb 23 13:01:28.660768 master-0 kubenswrapper[7845]: E0223 13:01:28.660740 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:36.660719958 +0000 UTC m=+30.656450869 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : configmap "client-ca" not found Feb 23 13:01:30.191848 master-0 kubenswrapper[7845]: I0223 13:01:30.191735 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:30.193160 master-0 kubenswrapper[7845]: I0223 13:01:30.192115 7845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:01:30.223421 master-0 kubenswrapper[7845]: I0223 13:01:30.223364 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:01:30.587525 master-0 kubenswrapper[7845]: I0223 13:01:30.587348 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-568db89b47-fbwml"] Feb 23 13:01:30.588860 master-0 kubenswrapper[7845]: I0223 13:01:30.588785 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.592759 master-0 kubenswrapper[7845]: I0223 13:01:30.592671 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 13:01:30.592759 master-0 kubenswrapper[7845]: I0223 13:01:30.592726 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 13:01:30.593120 master-0 kubenswrapper[7845]: I0223 13:01:30.593062 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Feb 23 13:01:30.593484 master-0 kubenswrapper[7845]: I0223 13:01:30.593384 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 13:01:30.593851 master-0 kubenswrapper[7845]: I0223 13:01:30.593790 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 13:01:30.594297 master-0 kubenswrapper[7845]: I0223 13:01:30.594230 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Feb 23 13:01:30.594453 master-0 kubenswrapper[7845]: I0223 13:01:30.594327 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 13:01:30.597201 master-0 kubenswrapper[7845]: I0223 13:01:30.597152 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 13:01:30.600302 master-0 kubenswrapper[7845]: I0223 13:01:30.600227 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 13:01:30.608537 master-0 kubenswrapper[7845]: I0223 13:01:30.608447 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 13:01:30.623227 master-0 kubenswrapper[7845]: I0223 13:01:30.623159 7845 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-568db89b47-fbwml"] Feb 23 13:01:30.692416 master-0 kubenswrapper[7845]: I0223 13:01:30.692327 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxf9\" (UniqueName: \"kubernetes.io/projected/f359387d-fd8c-4748-a937-a1389b6b3495-kube-api-access-wkxf9\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.692416 master-0 kubenswrapper[7845]: I0223 13:01:30.692420 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.692825 master-0 kubenswrapper[7845]: I0223 13:01:30.692486 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-image-import-ca\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.692825 master-0 kubenswrapper[7845]: I0223 13:01:30.692658 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-trusted-ca-bundle\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.692825 master-0 kubenswrapper[7845]: I0223 13:01:30.692721 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-audit-dir\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.692825 master-0 kubenswrapper[7845]: I0223 13:01:30.692818 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-config\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.693466 master-0 kubenswrapper[7845]: I0223 13:01:30.692852 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-encryption-config\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.693466 master-0 kubenswrapper[7845]: I0223 13:01:30.692920 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-client\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.693466 master-0 kubenswrapper[7845]: I0223 13:01:30.693398 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.694091 master-0 kubenswrapper[7845]: I0223 13:01:30.693488 7845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-serving-ca\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.694091 master-0 kubenswrapper[7845]: I0223 13:01:30.693656 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-node-pullsecrets\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.794931 master-0 kubenswrapper[7845]: I0223 13:01:30.794848 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-node-pullsecrets\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.795217 master-0 kubenswrapper[7845]: I0223 13:01:30.795135 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-node-pullsecrets\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:30.795297 master-0 kubenswrapper[7845]: I0223 13:01:30.795209 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxf9\" (UniqueName: \"kubernetes.io/projected/f359387d-fd8c-4748-a937-a1389b6b3495-kube-api-access-wkxf9\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " 
pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.795600 master-0 kubenswrapper[7845]: I0223 13:01:30.795555 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.795865 master-0 kubenswrapper[7845]: I0223 13:01:30.795820 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-image-import-ca\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.795968 master-0 kubenswrapper[7845]: I0223 13:01:30.795928 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-trusted-ca-bundle\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.796023 master-0 kubenswrapper[7845]: E0223 13:01:30.795828 7845 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 23 13:01:30.796069 master-0 kubenswrapper[7845]: I0223 13:01:30.796023 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-audit-dir\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.796184 master-0 kubenswrapper[7845]: E0223 13:01:30.796147 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert podName:f359387d-fd8c-4748-a937-a1389b6b3495 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:31.296076152 +0000 UTC m=+25.291807033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert") pod "apiserver-568db89b47-fbwml" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495") : secret "serving-cert" not found
Feb 23 13:01:30.796184 master-0 kubenswrapper[7845]: I0223 13:01:30.796165 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-audit-dir\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.796353 master-0 kubenswrapper[7845]: I0223 13:01:30.796323 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-config\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.796406 master-0 kubenswrapper[7845]: I0223 13:01:30.796371 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-encryption-config\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.796728 master-0 kubenswrapper[7845]: I0223 13:01:30.796657 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-client\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.796847 master-0 kubenswrapper[7845]: I0223 13:01:30.796800 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.796847 master-0 kubenswrapper[7845]: I0223 13:01:30.796826 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-image-import-ca\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.796938 master-0 kubenswrapper[7845]: I0223 13:01:30.796881 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-serving-ca\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.797275 master-0 kubenswrapper[7845]: E0223 13:01:30.797037 7845 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 23 13:01:30.797396 master-0 kubenswrapper[7845]: E0223 13:01:30.797353 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit podName:f359387d-fd8c-4748-a937-a1389b6b3495 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:31.29732156 +0000 UTC m=+25.293052471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit") pod "apiserver-568db89b47-fbwml" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495") : configmap "audit-0" not found
Feb 23 13:01:30.798174 master-0 kubenswrapper[7845]: I0223 13:01:30.798121 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-config\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.798453 master-0 kubenswrapper[7845]: I0223 13:01:30.798406 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-serving-ca\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.799813 master-0 kubenswrapper[7845]: I0223 13:01:30.799757 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-trusted-ca-bundle\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.803831 master-0 kubenswrapper[7845]: I0223 13:01:30.803785 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-encryption-config\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.805539 master-0 kubenswrapper[7845]: I0223 13:01:30.805488 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-client\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:30.827866 master-0 kubenswrapper[7845]: I0223 13:01:30.827800 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxf9\" (UniqueName: \"kubernetes.io/projected/f359387d-fd8c-4748-a937-a1389b6b3495-kube-api-access-wkxf9\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:31.305442 master-0 kubenswrapper[7845]: I0223 13:01:31.305322 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:31.306491 master-0 kubenswrapper[7845]: E0223 13:01:31.305610 7845 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 23 13:01:31.306491 master-0 kubenswrapper[7845]: I0223 13:01:31.305743 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:31.306491 master-0 kubenswrapper[7845]: E0223 13:01:31.305821 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit podName:f359387d-fd8c-4748-a937-a1389b6b3495 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:32.305783416 +0000 UTC m=+26.301514327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit") pod "apiserver-568db89b47-fbwml" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495") : configmap "audit-0" not found
Feb 23 13:01:31.306491 master-0 kubenswrapper[7845]: E0223 13:01:31.305963 7845 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 23 13:01:31.306491 master-0 kubenswrapper[7845]: E0223 13:01:31.306084 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert podName:f359387d-fd8c-4748-a937-a1389b6b3495 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:32.306051634 +0000 UTC m=+26.301782545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert") pod "apiserver-568db89b47-fbwml" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495") : secret "serving-cert" not found
Feb 23 13:01:32.015938 master-0 kubenswrapper[7845]: I0223 13:01:32.015872 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl"
Feb 23 13:01:32.016373 master-0 kubenswrapper[7845]: E0223 13:01:32.016090 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:32.016373 master-0 kubenswrapper[7845]: E0223 13:01:32.016191 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca podName:d11074ac-1ee4-447e-883d-b78a5a03176f nodeName:}" failed. No retries permitted until 2026-02-23 13:01:40.016164952 +0000 UTC m=+34.011896013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca") pod "controller-manager-557bf46fb-8ljrl" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f") : configmap "client-ca" not found
Feb 23 13:01:32.320549 master-0 kubenswrapper[7845]: I0223 13:01:32.320347 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:32.321483 master-0 kubenswrapper[7845]: E0223 13:01:32.320590 7845 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 23 13:01:32.321483 master-0 kubenswrapper[7845]: E0223 13:01:32.320731 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert podName:f359387d-fd8c-4748-a937-a1389b6b3495 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:34.32069687 +0000 UTC m=+28.316427781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert") pod "apiserver-568db89b47-fbwml" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495") : secret "serving-cert" not found
Feb 23 13:01:32.321483 master-0 kubenswrapper[7845]: I0223 13:01:32.320867 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:32.321483 master-0 kubenswrapper[7845]: E0223 13:01:32.321060 7845 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 23 13:01:32.321483 master-0 kubenswrapper[7845]: E0223 13:01:32.321156 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit podName:f359387d-fd8c-4748-a937-a1389b6b3495 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:34.321131503 +0000 UTC m=+28.316862404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit") pod "apiserver-568db89b47-fbwml" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495") : configmap "audit-0" not found
Feb 23 13:01:34.348699 master-0 kubenswrapper[7845]: I0223 13:01:34.348343 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:34.349693 master-0 kubenswrapper[7845]: I0223 13:01:34.349613 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert\") pod \"apiserver-568db89b47-fbwml\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:34.349944 master-0 kubenswrapper[7845]: E0223 13:01:34.348556 7845 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 23 13:01:34.350030 master-0 kubenswrapper[7845]: E0223 13:01:34.350006 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit podName:f359387d-fd8c-4748-a937-a1389b6b3495 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.349974001 +0000 UTC m=+32.345704912 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit") pod "apiserver-568db89b47-fbwml" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495") : configmap "audit-0" not found
Feb 23 13:01:34.350418 master-0 kubenswrapper[7845]: E0223 13:01:34.349889 7845 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 23 13:01:34.350587 master-0 kubenswrapper[7845]: E0223 13:01:34.350553 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert podName:f359387d-fd8c-4748-a937-a1389b6b3495 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:38.350495327 +0000 UTC m=+32.346226238 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert") pod "apiserver-568db89b47-fbwml" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495") : secret "serving-cert" not found
Feb 23 13:01:34.528509 master-0 kubenswrapper[7845]: I0223 13:01:34.528427 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"]
Feb 23 13:01:34.529738 master-0 kubenswrapper[7845]: I0223 13:01:34.529674 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.543805 master-0 kubenswrapper[7845]: I0223 13:01:34.543726 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 23 13:01:34.544439 master-0 kubenswrapper[7845]: I0223 13:01:34.544388 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 23 13:01:34.555941 master-0 kubenswrapper[7845]: I0223 13:01:34.555860 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 23 13:01:34.657659 master-0 kubenswrapper[7845]: I0223 13:01:34.657491 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0d6008c-6e09-4e61-83a5-60456ca90e1e-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.658018 master-0 kubenswrapper[7845]: I0223 13:01:34.657935 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.658113 master-0 kubenswrapper[7845]: I0223 13:01:34.658036 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.658336 master-0 kubenswrapper[7845]: I0223 13:01:34.658285 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.658429 master-0 kubenswrapper[7845]: I0223 13:01:34.658403 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l49w\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-kube-api-access-9l49w\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.759756 master-0 kubenswrapper[7845]: I0223 13:01:34.759658 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.760041 master-0 kubenswrapper[7845]: I0223 13:01:34.759815 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l49w\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-kube-api-access-9l49w\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.760115 master-0 kubenswrapper[7845]: I0223 13:01:34.760054 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0d6008c-6e09-4e61-83a5-60456ca90e1e-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.760364 master-0 kubenswrapper[7845]: I0223 13:01:34.760292 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.760364 master-0 kubenswrapper[7845]: I0223 13:01:34.760325 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.760513 master-0 kubenswrapper[7845]: I0223 13:01:34.760464 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.760830 master-0 kubenswrapper[7845]: I0223 13:01:34.760771 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0d6008c-6e09-4e61-83a5-60456ca90e1e-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.760936 master-0 kubenswrapper[7845]: I0223 13:01:34.760772 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.766877 master-0 kubenswrapper[7845]: I0223 13:01:34.766822 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:34.791394 master-0 kubenswrapper[7845]: I0223 13:01:34.791312 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"]
Feb 23 13:01:35.272732 master-0 kubenswrapper[7845]: I0223 13:01:35.272641 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l49w\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-kube-api-access-9l49w\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:35.463312 master-0 kubenswrapper[7845]: I0223 13:01:35.462974 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:35.734328 master-0 kubenswrapper[7845]: I0223 13:01:35.727898 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"]
Feb 23 13:01:35.734328 master-0 kubenswrapper[7845]: I0223 13:01:35.728961 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.759557 master-0 kubenswrapper[7845]: I0223 13:01:35.759095 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 23 13:01:35.759557 master-0 kubenswrapper[7845]: I0223 13:01:35.759281 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 23 13:01:35.759557 master-0 kubenswrapper[7845]: I0223 13:01:35.759444 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 23 13:01:35.779434 master-0 kubenswrapper[7845]: I0223 13:01:35.772673 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"]
Feb 23 13:01:35.790823 master-0 kubenswrapper[7845]: I0223 13:01:35.790733 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"]
Feb 23 13:01:35.809559 master-0 kubenswrapper[7845]: I0223 13:01:35.809510 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 23 13:01:35.829733 master-0 kubenswrapper[7845]: I0223 13:01:35.829689 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-568db89b47-fbwml"]
Feb 23 13:01:35.830296 master-0 kubenswrapper[7845]: E0223 13:01:35.830187 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-568db89b47-fbwml" podUID="f359387d-fd8c-4748-a937-a1389b6b3495"
Feb 23 13:01:35.880036 master-0 kubenswrapper[7845]: I0223 13:01:35.879921 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.880036 master-0 kubenswrapper[7845]: I0223 13:01:35.879960 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.880332 master-0 kubenswrapper[7845]: I0223 13:01:35.880069 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bfbb4d6d-7047-48cb-be03-97a57fc688e3-cache\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.880332 master-0 kubenswrapper[7845]: I0223 13:01:35.880089 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.880332 master-0 kubenswrapper[7845]: I0223 13:01:35.880118 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.880332 master-0 kubenswrapper[7845]: I0223 13:01:35.880133 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqsvs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-kube-api-access-rqsvs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.981546 master-0 kubenswrapper[7845]: I0223 13:01:35.981484 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bfbb4d6d-7047-48cb-be03-97a57fc688e3-cache\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.981546 master-0 kubenswrapper[7845]: I0223 13:01:35.981543 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.981786 master-0 kubenswrapper[7845]: I0223 13:01:35.981584 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.981786 master-0 kubenswrapper[7845]: I0223 13:01:35.981608 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqsvs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-kube-api-access-rqsvs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.981786 master-0 kubenswrapper[7845]: I0223 13:01:35.981659 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.981786 master-0 kubenswrapper[7845]: I0223 13:01:35.981681 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.981939 master-0 kubenswrapper[7845]: I0223 13:01:35.981814 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.981939 master-0 kubenswrapper[7845]: E0223 13:01:35.981919 7845 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found
Feb 23 13:01:35.982017 master-0 kubenswrapper[7845]: E0223 13:01:35.981973 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs podName:bfbb4d6d-7047-48cb-be03-97a57fc688e3 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:36.481952721 +0000 UTC m=+30.477683592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs") pod "catalogd-controller-manager-84b8d9d697-bckd6" (UID: "bfbb4d6d-7047-48cb-be03-97a57fc688e3") : secret "catalogserver-cert" not found
Feb 23 13:01:35.982017 master-0 kubenswrapper[7845]: I0223 13:01:35.982001 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bfbb4d6d-7047-48cb-be03-97a57fc688e3-cache\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.983629 master-0 kubenswrapper[7845]: I0223 13:01:35.982414 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:35.986913 master-0 kubenswrapper[7845]: I0223 13:01:35.986880 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:36.005872 master-0 kubenswrapper[7845]: I0223 13:01:36.005805 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqsvs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-kube-api-access-rqsvs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:36.487524 master-0 kubenswrapper[7845]: I0223 13:01:36.487446 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:36.489055 master-0 kubenswrapper[7845]: E0223 13:01:36.487774 7845 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found
Feb 23 13:01:36.489055 master-0 kubenswrapper[7845]: E0223 13:01:36.487886 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs podName:bfbb4d6d-7047-48cb-be03-97a57fc688e3 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:37.487852521 +0000 UTC m=+31.483583802 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs") pod "catalogd-controller-manager-84b8d9d697-bckd6" (UID: "bfbb4d6d-7047-48cb-be03-97a57fc688e3") : secret "catalogserver-cert" not found
Feb 23 13:01:36.504629 master-0 kubenswrapper[7845]: I0223 13:01:36.504546 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:36.504629 master-0 kubenswrapper[7845]: I0223 13:01:36.504570 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" event={"ID":"c0d6008c-6e09-4e61-83a5-60456ca90e1e","Type":"ContainerStarted","Data":"f787a879efc5d4242ecd95b4dc2b9421807d998730d3c7d0198ac608a22e096d"}
Feb 23 13:01:36.504992 master-0 kubenswrapper[7845]: I0223 13:01:36.504658 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" event={"ID":"c0d6008c-6e09-4e61-83a5-60456ca90e1e","Type":"ContainerStarted","Data":"49260b269ae6d09884492d00790a3a52d5e0644389747da3e51aa260e0b91b26"}
Feb 23 13:01:36.504992 master-0 kubenswrapper[7845]: I0223 13:01:36.504686 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" event={"ID":"c0d6008c-6e09-4e61-83a5-60456ca90e1e","Type":"ContainerStarted","Data":"e8a55e200b06071852324dd5becc03353e4f62598f3846b794dbf08621f93e39"}
Feb 23 13:01:36.518761 master-0 kubenswrapper[7845]: I0223 13:01:36.518707 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-568db89b47-fbwml"
Feb 23 13:01:36.543423 master-0 kubenswrapper[7845]: I0223 13:01:36.543335 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" podStartSLOduration=2.543309595 podStartE2EDuration="2.543309595s" podCreationTimestamp="2026-02-23 13:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:36.54148242 +0000 UTC m=+30.537213331" watchObservedRunningTime="2026-02-23 13:01:36.543309595 +0000 UTC m=+30.539040496"
Feb 23 13:01:36.692089 master-0 kubenswrapper[7845]: I0223 13:01:36.691713 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkxf9\" (UniqueName: \"kubernetes.io/projected/f359387d-fd8c-4748-a937-a1389b6b3495-kube-api-access-wkxf9\") pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") "
Feb 23 13:01:36.692089 master-0 kubenswrapper[7845]: I0223 13:01:36.692102 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-audit-dir\") pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") "
Feb 23 13:01:36.692442 master-0 kubenswrapper[7845]: I0223 13:01:36.692145 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-config\") pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") "
Feb 23 13:01:36.692442 master-0 kubenswrapper[7845]: I0223 13:01:36.692170 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName:
\"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-serving-ca\") pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " Feb 23 13:01:36.692442 master-0 kubenswrapper[7845]: I0223 13:01:36.692202 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-trusted-ca-bundle\") pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " Feb 23 13:01:36.692442 master-0 kubenswrapper[7845]: I0223 13:01:36.692202 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f359387d-fd8c-4748-a937-a1389b6b3495" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:01:36.692442 master-0 kubenswrapper[7845]: I0223 13:01:36.692237 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-node-pullsecrets\") pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " Feb 23 13:01:36.692442 master-0 kubenswrapper[7845]: I0223 13:01:36.692343 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-image-import-ca\") pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " Feb 23 13:01:36.692808 master-0 kubenswrapper[7845]: I0223 13:01:36.692510 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-encryption-config\") 
pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " Feb 23 13:01:36.692808 master-0 kubenswrapper[7845]: I0223 13:01:36.692572 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-client\") pod \"f359387d-fd8c-4748-a937-a1389b6b3495\" (UID: \"f359387d-fd8c-4748-a937-a1389b6b3495\") " Feb 23 13:01:36.692808 master-0 kubenswrapper[7845]: I0223 13:01:36.692446 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f359387d-fd8c-4748-a937-a1389b6b3495" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:01:36.692808 master-0 kubenswrapper[7845]: I0223 13:01:36.692728 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-config" (OuterVolumeSpecName: "config") pod "f359387d-fd8c-4748-a937-a1389b6b3495" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:36.692808 master-0 kubenswrapper[7845]: I0223 13:01:36.692748 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f359387d-fd8c-4748-a937-a1389b6b3495" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:36.693210 master-0 kubenswrapper[7845]: I0223 13:01:36.693126 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "f359387d-fd8c-4748-a937-a1389b6b3495" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:36.693350 master-0 kubenswrapper[7845]: I0223 13:01:36.693310 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:36.693422 master-0 kubenswrapper[7845]: I0223 13:01:36.693374 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:36.694036 master-0 kubenswrapper[7845]: E0223 13:01:36.693988 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:36.694108 master-0 kubenswrapper[7845]: E0223 13:01:36.694080 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca podName:9ff5f614-bdb1-411b-9578-6c28bdeddfbf nodeName:}" failed. No retries permitted until 2026-02-23 13:01:52.694052108 +0000 UTC m=+46.689783009 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca") pod "route-controller-manager-7966944567-cqfvc" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf") : configmap "client-ca" not found Feb 23 13:01:36.694857 master-0 kubenswrapper[7845]: I0223 13:01:36.694808 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.694857 master-0 kubenswrapper[7845]: I0223 13:01:36.694849 7845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.695007 master-0 kubenswrapper[7845]: I0223 13:01:36.694872 7845 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.695007 master-0 kubenswrapper[7845]: I0223 13:01:36.694893 7845 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-image-import-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.695007 master-0 kubenswrapper[7845]: I0223 13:01:36.694919 7845 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f359387d-fd8c-4748-a937-a1389b6b3495-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.696171 master-0 kubenswrapper[7845]: I0223 13:01:36.696105 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f359387d-fd8c-4748-a937-a1389b6b3495" 
(UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:01:36.697421 master-0 kubenswrapper[7845]: I0223 13:01:36.697335 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f359387d-fd8c-4748-a937-a1389b6b3495" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:01:36.697517 master-0 kubenswrapper[7845]: I0223 13:01:36.697340 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f359387d-fd8c-4748-a937-a1389b6b3495-kube-api-access-wkxf9" (OuterVolumeSpecName: "kube-api-access-wkxf9") pod "f359387d-fd8c-4748-a937-a1389b6b3495" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "kube-api-access-wkxf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:01:36.697517 master-0 kubenswrapper[7845]: I0223 13:01:36.697445 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f359387d-fd8c-4748-a937-a1389b6b3495" (UID: "f359387d-fd8c-4748-a937-a1389b6b3495"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:01:36.698004 master-0 kubenswrapper[7845]: I0223 13:01:36.697953 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"route-controller-manager-7966944567-cqfvc\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") " pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:36.796649 master-0 kubenswrapper[7845]: I0223 13:01:36.796509 7845 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.796649 master-0 kubenswrapper[7845]: I0223 13:01:36.796563 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkxf9\" (UniqueName: \"kubernetes.io/projected/f359387d-fd8c-4748-a937-a1389b6b3495-kube-api-access-wkxf9\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.796649 master-0 kubenswrapper[7845]: I0223 13:01:36.796586 7845 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.796649 master-0 kubenswrapper[7845]: I0223 13:01:36.796608 7845 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:36.929963 master-0 kubenswrapper[7845]: I0223 13:01:36.929903 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 23 13:01:36.930797 master-0 kubenswrapper[7845]: I0223 13:01:36.930761 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:36.933613 master-0 kubenswrapper[7845]: I0223 13:01:36.933566 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 23 13:01:36.941678 master-0 kubenswrapper[7845]: I0223 13:01:36.941651 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 23 13:01:37.101158 master-0 kubenswrapper[7845]: I0223 13:01:37.100995 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.101158 master-0 kubenswrapper[7845]: I0223 13:01:37.101094 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-var-lock\") pod \"installer-1-master-0\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.101404 master-0 kubenswrapper[7845]: I0223 13:01:37.101366 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6ff6aee-649e-4ee8-9f73-eb3517297706-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.204341 master-0 kubenswrapper[7845]: I0223 13:01:37.204267 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-kubelet-dir\") pod \"installer-1-master-0\" (UID: 
\"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.204341 master-0 kubenswrapper[7845]: I0223 13:01:37.204342 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-var-lock\") pod \"installer-1-master-0\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.204577 master-0 kubenswrapper[7845]: I0223 13:01:37.204470 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.204708 master-0 kubenswrapper[7845]: I0223 13:01:37.204668 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-var-lock\") pod \"installer-1-master-0\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.204847 master-0 kubenswrapper[7845]: I0223 13:01:37.204793 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6ff6aee-649e-4ee8-9f73-eb3517297706-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.234336 master-0 kubenswrapper[7845]: I0223 13:01:37.234306 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6ff6aee-649e-4ee8-9f73-eb3517297706-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") " 
pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.264265 master-0 kubenswrapper[7845]: I0223 13:01:37.264180 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 23 13:01:37.508771 master-0 kubenswrapper[7845]: I0223 13:01:37.508387 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-568db89b47-fbwml" Feb 23 13:01:37.519708 master-0 kubenswrapper[7845]: I0223 13:01:37.508425 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:01:37.519708 master-0 kubenswrapper[7845]: I0223 13:01:37.508816 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:01:37.519708 master-0 kubenswrapper[7845]: E0223 13:01:37.509167 7845 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Feb 23 13:01:37.519708 master-0 kubenswrapper[7845]: E0223 13:01:37.509296 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs podName:bfbb4d6d-7047-48cb-be03-97a57fc688e3 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:39.50926299 +0000 UTC m=+33.504993891 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs") pod "catalogd-controller-manager-84b8d9d697-bckd6" (UID: "bfbb4d6d-7047-48cb-be03-97a57fc688e3") : secret "catalogserver-cert" not found Feb 23 13:01:37.583605 master-0 kubenswrapper[7845]: I0223 13:01:37.583532 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6dcf85cb46-cmf75"] Feb 23 13:01:37.584748 master-0 kubenswrapper[7845]: I0223 13:01:37.584674 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.590399 master-0 kubenswrapper[7845]: I0223 13:01:37.590356 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 13:01:37.591111 master-0 kubenswrapper[7845]: I0223 13:01:37.591083 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 13:01:37.591523 master-0 kubenswrapper[7845]: I0223 13:01:37.591497 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 13:01:37.592305 master-0 kubenswrapper[7845]: I0223 13:01:37.592278 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 13:01:37.592734 master-0 kubenswrapper[7845]: I0223 13:01:37.592707 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 13:01:37.593887 master-0 kubenswrapper[7845]: I0223 13:01:37.593847 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 13:01:37.594369 master-0 kubenswrapper[7845]: I0223 13:01:37.594343 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 13:01:37.594753 master-0 kubenswrapper[7845]: 
I0223 13:01:37.594728 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 13:01:37.595155 master-0 kubenswrapper[7845]: I0223 13:01:37.595130 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 13:01:37.621188 master-0 kubenswrapper[7845]: I0223 13:01:37.621061 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 13:01:37.621682 master-0 kubenswrapper[7845]: I0223 13:01:37.621127 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-568db89b47-fbwml"] Feb 23 13:01:37.623934 master-0 kubenswrapper[7845]: I0223 13:01:37.623907 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 23 13:01:37.626187 master-0 kubenswrapper[7845]: I0223 13:01:37.626142 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-568db89b47-fbwml"] Feb 23 13:01:37.626314 master-0 kubenswrapper[7845]: I0223 13:01:37.626191 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6dcf85cb46-cmf75"] Feb 23 13:01:37.720953 master-0 kubenswrapper[7845]: I0223 13:01:37.720777 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.720953 master-0 kubenswrapper[7845]: I0223 13:01:37.720840 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-encryption-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: 
\"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.720953 master-0 kubenswrapper[7845]: I0223 13:01:37.720870 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbl2g\" (UniqueName: \"kubernetes.io/projected/c159d5f4-5c95-4600-80ec-a17a419cfd7a-kube-api-access-rbl2g\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.721576 master-0 kubenswrapper[7845]: I0223 13:01:37.721034 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-client\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.721576 master-0 kubenswrapper[7845]: I0223 13:01:37.721095 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-trusted-ca-bundle\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.721576 master-0 kubenswrapper[7845]: I0223 13:01:37.721117 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.721576 master-0 kubenswrapper[7845]: I0223 13:01:37.721212 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-serving-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.722449 master-0 kubenswrapper[7845]: I0223 13:01:37.722162 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-image-import-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.722449 master-0 kubenswrapper[7845]: I0223 13:01:37.722369 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-serving-cert\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.722602 master-0 kubenswrapper[7845]: I0223 13:01:37.722452 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit-dir\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.722602 master-0 kubenswrapper[7845]: I0223 13:01:37.722501 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-node-pullsecrets\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:01:37.823673 master-0 kubenswrapper[7845]: I0223 
13:01:37.823594 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-trusted-ca-bundle\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.823840 master-0 kubenswrapper[7845]: I0223 13:01:37.823679 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.823840 master-0 kubenswrapper[7845]: I0223 13:01:37.823720 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-serving-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.823840 master-0 kubenswrapper[7845]: I0223 13:01:37.823803 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-image-import-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.823993 master-0 kubenswrapper[7845]: I0223 13:01:37.823854 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-serving-cert\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.823993 master-0 kubenswrapper[7845]: I0223 13:01:37.823907 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit-dir\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.823993 master-0 kubenswrapper[7845]: I0223 13:01:37.823942 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-node-pullsecrets\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.824106 master-0 kubenswrapper[7845]: I0223 13:01:37.824037 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.824106 master-0 kubenswrapper[7845]: I0223 13:01:37.824072 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-encryption-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.824204 master-0 kubenswrapper[7845]: I0223 13:01:37.824106 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbl2g\" (UniqueName: \"kubernetes.io/projected/c159d5f4-5c95-4600-80ec-a17a419cfd7a-kube-api-access-rbl2g\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.824204 master-0 kubenswrapper[7845]: I0223 13:01:37.824169 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-client\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.824889 master-0 kubenswrapper[7845]: I0223 13:01:37.824330 7845 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f359387d-fd8c-4748-a937-a1389b6b3495-audit\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:37.824889 master-0 kubenswrapper[7845]: I0223 13:01:37.824356 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f359387d-fd8c-4748-a937-a1389b6b3495-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:37.825844 master-0 kubenswrapper[7845]: I0223 13:01:37.825396 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit-dir\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.825844 master-0 kubenswrapper[7845]: I0223 13:01:37.825589 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-node-pullsecrets\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.827033 master-0 kubenswrapper[7845]: I0223 13:01:37.826990 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.827336 master-0 kubenswrapper[7845]: I0223 13:01:37.827303 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.827671 master-0 kubenswrapper[7845]: I0223 13:01:37.827636 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-trusted-ca-bundle\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.827844 master-0 kubenswrapper[7845]: I0223 13:01:37.827800 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-image-import-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.828013 master-0 kubenswrapper[7845]: I0223 13:01:37.827966 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-serving-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.832746 master-0 kubenswrapper[7845]: I0223 13:01:37.832706 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-serving-cert\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.833045 master-0 kubenswrapper[7845]: I0223 13:01:37.833024 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-client\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.834342 master-0 kubenswrapper[7845]: I0223 13:01:37.833790 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-encryption-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.842616 master-0 kubenswrapper[7845]: I0223 13:01:37.842583 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbl2g\" (UniqueName: \"kubernetes.io/projected/c159d5f4-5c95-4600-80ec-a17a419cfd7a-kube-api-access-rbl2g\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:37.945327 master-0 kubenswrapper[7845]: I0223 13:01:37.945275 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:38.216178 master-0 kubenswrapper[7845]: I0223 13:01:38.215769 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f359387d-fd8c-4748-a937-a1389b6b3495" path="/var/lib/kubelet/pods/f359387d-fd8c-4748-a937-a1389b6b3495/volumes"
Feb 23 13:01:38.246549 master-0 kubenswrapper[7845]: I0223 13:01:38.246432 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6dcf85cb46-cmf75"]
Feb 23 13:01:38.259217 master-0 kubenswrapper[7845]: W0223 13:01:38.259125 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc159d5f4_5c95_4600_80ec_a17a419cfd7a.slice/crio-3c46e007ea8dbe14a7d36fc217c695f92a860be1997c49493f763a50d92a0aea WatchSource:0}: Error finding container 3c46e007ea8dbe14a7d36fc217c695f92a860be1997c49493f763a50d92a0aea: Status 404 returned error can't find the container with id 3c46e007ea8dbe14a7d36fc217c695f92a860be1997c49493f763a50d92a0aea
Feb 23 13:01:38.515393 master-0 kubenswrapper[7845]: I0223 13:01:38.515326 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" event={"ID":"c159d5f4-5c95-4600-80ec-a17a419cfd7a","Type":"ContainerStarted","Data":"3c46e007ea8dbe14a7d36fc217c695f92a860be1997c49493f763a50d92a0aea"}
Feb 23 13:01:38.520456 master-0 kubenswrapper[7845]: I0223 13:01:38.520363 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"a6ff6aee-649e-4ee8-9f73-eb3517297706","Type":"ContainerStarted","Data":"f97091b8d61792d1be2f0eb4a50b8a9ee548a1277d9101dba04451e10f5f3331"}
Feb 23 13:01:38.520456 master-0 kubenswrapper[7845]: I0223 13:01:38.520408 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"a6ff6aee-649e-4ee8-9f73-eb3517297706","Type":"ContainerStarted","Data":"4be5c18a6c854aadb8ace6a50f8dda1fa624ebf315d80592a6eb921cac92c0d3"}
Feb 23 13:01:38.540080 master-0 kubenswrapper[7845]: I0223 13:01:38.539974 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.539948797 podStartE2EDuration="2.539948797s" podCreationTimestamp="2026-02-23 13:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:38.536138762 +0000 UTC m=+32.531869673" watchObservedRunningTime="2026-02-23 13:01:38.539948797 +0000 UTC m=+32.535679698"
Feb 23 13:01:38.632314 master-0 kubenswrapper[7845]: I0223 13:01:38.629475 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-557bf46fb-8ljrl"]
Feb 23 13:01:38.632314 master-0 kubenswrapper[7845]: E0223 13:01:38.629806 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" podUID="d11074ac-1ee4-447e-883d-b78a5a03176f"
Feb 23 13:01:38.669117 master-0 kubenswrapper[7845]: I0223 13:01:38.663868 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"]
Feb 23 13:01:38.669117 master-0 kubenswrapper[7845]: E0223 13:01:38.664140 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" podUID="9ff5f614-bdb1-411b-9578-6c28bdeddfbf"
Feb 23 13:01:39.043642 master-0 kubenswrapper[7845]: I0223 13:01:39.043547 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:39.043976 master-0 kubenswrapper[7845]: I0223 13:01:39.043679 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 13:01:39.044578 master-0 kubenswrapper[7845]: I0223 13:01:39.044542 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:39.045276 master-0 kubenswrapper[7845]: I0223 13:01:39.045206 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:01:39.045437 master-0 kubenswrapper[7845]: I0223 13:01:39.045319 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:39.045437 master-0 kubenswrapper[7845]: I0223 13:01:39.045407 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:01:39.045616 master-0 kubenswrapper[7845]: I0223 13:01:39.045456 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:39.045616 master-0 kubenswrapper[7845]: I0223 13:01:39.045512 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:39.045616 master-0 kubenswrapper[7845]: I0223 13:01:39.045606 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:39.055309 master-0 kubenswrapper[7845]: I0223 13:01:39.051681 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:01:39.058616 master-0 kubenswrapper[7845]: I0223 13:01:39.058545 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:39.061288 master-0 kubenswrapper[7845]: I0223 13:01:39.059727 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:39.061288 master-0 kubenswrapper[7845]: I0223 13:01:39.060355 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:39.061288 master-0 kubenswrapper[7845]: I0223 13:01:39.061138 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-lfpt7\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 13:01:39.061728 master-0 kubenswrapper[7845]: I0223 13:01:39.061680 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:01:39.062232 master-0 kubenswrapper[7845]: I0223 13:01:39.062194 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:39.062945 master-0 kubenswrapper[7845]: I0223 13:01:39.062903 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:39.063833 master-0 kubenswrapper[7845]: I0223 13:01:39.063782 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:39.147060 master-0 kubenswrapper[7845]: I0223 13:01:39.146984 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:39.147366 master-0 kubenswrapper[7845]: I0223 13:01:39.147193 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:01:39.147366 master-0 kubenswrapper[7845]: I0223 13:01:39.147257 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:01:39.152435 master-0 kubenswrapper[7845]: I0223 13:01:39.152368 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:39.152604 master-0 kubenswrapper[7845]: I0223 13:01:39.152556 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:01:39.154566 master-0 kubenswrapper[7845]: I0223 13:01:39.154502 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:01:39.251415 master-0 kubenswrapper[7845]: I0223 13:01:39.251332 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:39.256613 master-0 kubenswrapper[7845]: I0223 13:01:39.256553 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:01:39.259747 master-0 kubenswrapper[7845]: I0223 13:01:39.259700 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"
Feb 23 13:01:39.265774 master-0 kubenswrapper[7845]: I0223 13:01:39.265730 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:01:39.267679 master-0 kubenswrapper[7845]: I0223 13:01:39.267629 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:01:39.276668 master-0 kubenswrapper[7845]: I0223 13:01:39.268573 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:01:39.276668 master-0 kubenswrapper[7845]: I0223 13:01:39.271378 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:01:39.276668 master-0 kubenswrapper[7845]: I0223 13:01:39.271676 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:01:39.276668 master-0 kubenswrapper[7845]: I0223 13:01:39.272462 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-7466r"
Feb 23 13:01:39.276668 master-0 kubenswrapper[7845]: I0223 13:01:39.274182 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:01:39.284054 master-0 kubenswrapper[7845]: I0223 13:01:39.276994 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:01:39.362287 master-0 kubenswrapper[7845]: W0223 13:01:39.361944 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb053c311_07fd_45bb_ab10_6e7b76c9aa48.slice/crio-fa3167a637f939e5683169cc2e4072a308d730dd71812369b7848e7a51a319c7 WatchSource:0}: Error finding container fa3167a637f939e5683169cc2e4072a308d730dd71812369b7848e7a51a319c7: Status 404 returned error can't find the container with id fa3167a637f939e5683169cc2e4072a308d730dd71812369b7848e7a51a319c7
Feb 23 13:01:39.533864 master-0 kubenswrapper[7845]: I0223 13:01:39.533722 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"
Feb 23 13:01:39.533864 master-0 kubenswrapper[7845]: I0223 13:01:39.533759 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl"
Feb 23 13:01:39.533864 master-0 kubenswrapper[7845]: I0223 13:01:39.533813 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" event={"ID":"b053c311-07fd-45bb-ab10-6e7b76c9aa48","Type":"ContainerStarted","Data":"fa3167a637f939e5683169cc2e4072a308d730dd71812369b7848e7a51a319c7"}
Feb 23 13:01:39.554448 master-0 kubenswrapper[7845]: I0223 13:01:39.553372 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:39.556187 master-0 kubenswrapper[7845]: I0223 13:01:39.556140 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"]
Feb 23 13:01:39.558412 master-0 kubenswrapper[7845]: I0223 13:01:39.558370 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"
Feb 23 13:01:39.559658 master-0 kubenswrapper[7845]: I0223 13:01:39.559625 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:39.571987 master-0 kubenswrapper[7845]: I0223 13:01:39.571936 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl"
Feb 23 13:01:39.652289 master-0 kubenswrapper[7845]: I0223 13:01:39.652227 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-28zcz"]
Feb 23 13:01:39.666931 master-0 kubenswrapper[7845]: W0223 13:01:39.666869 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d953c37_1b74_4ce5_89cb_b3f53454fc57.slice/crio-1e0c3eebcdc0a49021edd14002068e329a47b402595863d157041ee099c56c4c WatchSource:0}: Error finding container 1e0c3eebcdc0a49021edd14002068e329a47b402595863d157041ee099c56c4c: Status 404 returned error can't find the container with id 1e0c3eebcdc0a49021edd14002068e329a47b402595863d157041ee099c56c4c
Feb 23 13:01:39.741761 master-0 kubenswrapper[7845]: I0223 13:01:39.741711 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:39.754755 master-0 kubenswrapper[7845]: I0223 13:01:39.754713 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-config\") pod \"d11074ac-1ee4-447e-883d-b78a5a03176f\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") "
Feb 23 13:01:39.754874 master-0 kubenswrapper[7845]: I0223 13:01:39.754765 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2lm6\" (UniqueName: \"kubernetes.io/projected/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-kube-api-access-s2lm6\") pod \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") "
Feb 23 13:01:39.754874 master-0 kubenswrapper[7845]: I0223 13:01:39.754802 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7nrb\" (UniqueName: \"kubernetes.io/projected/d11074ac-1ee4-447e-883d-b78a5a03176f-kube-api-access-z7nrb\") pod \"d11074ac-1ee4-447e-883d-b78a5a03176f\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") "
Feb 23 13:01:39.754874 master-0 kubenswrapper[7845]: I0223 13:01:39.754821 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") pod \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") "
Feb 23 13:01:39.754874 master-0 kubenswrapper[7845]: I0223 13:01:39.754843 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-proxy-ca-bundles\") pod \"d11074ac-1ee4-447e-883d-b78a5a03176f\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") "
Feb 23 13:01:39.754874 master-0 kubenswrapper[7845]: I0223 13:01:39.754872 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-config\") pod \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\" (UID: \"9ff5f614-bdb1-411b-9578-6c28bdeddfbf\") "
Feb 23 13:01:39.755070 master-0 kubenswrapper[7845]: I0223 13:01:39.754890 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert\") pod \"d11074ac-1ee4-447e-883d-b78a5a03176f\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") "
Feb 23 13:01:39.756549 master-0 kubenswrapper[7845]: I0223 13:01:39.756493 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-config" (OuterVolumeSpecName: "config") pod "d11074ac-1ee4-447e-883d-b78a5a03176f" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:39.759215 master-0 kubenswrapper[7845]: I0223 13:01:39.759160 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d11074ac-1ee4-447e-883d-b78a5a03176f" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:39.760108 master-0 kubenswrapper[7845]: I0223 13:01:39.760055 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d11074ac-1ee4-447e-883d-b78a5a03176f" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:01:39.760784 master-0 kubenswrapper[7845]: I0223 13:01:39.760740 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9ff5f614-bdb1-411b-9578-6c28bdeddfbf" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:01:39.760894 master-0 kubenswrapper[7845]: I0223 13:01:39.759925 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-config" (OuterVolumeSpecName: "config") pod "9ff5f614-bdb1-411b-9578-6c28bdeddfbf" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:01:39.760970 master-0 kubenswrapper[7845]: I0223 13:01:39.760919 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-kube-api-access-s2lm6" (OuterVolumeSpecName: "kube-api-access-s2lm6") pod "9ff5f614-bdb1-411b-9578-6c28bdeddfbf" (UID: "9ff5f614-bdb1-411b-9578-6c28bdeddfbf"). InnerVolumeSpecName "kube-api-access-s2lm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:01:39.761018 master-0 kubenswrapper[7845]: I0223 13:01:39.760939 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11074ac-1ee4-447e-883d-b78a5a03176f-kube-api-access-z7nrb" (OuterVolumeSpecName: "kube-api-access-z7nrb") pod "d11074ac-1ee4-447e-883d-b78a5a03176f" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f"). InnerVolumeSpecName "kube-api-access-z7nrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:01:39.829294 master-0 kubenswrapper[7845]: I0223 13:01:39.829233 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-7466r"]
Feb 23 13:01:39.835498 master-0 kubenswrapper[7845]: W0223 13:01:39.835450 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08577c3c_73d8_47f4_ba30_aec11af51d40.slice/crio-4220039c33efb83321a003be7571a3649fc8e65f3d945873306ea0af077401f3 WatchSource:0}: Error finding container 4220039c33efb83321a003be7571a3649fc8e65f3d945873306ea0af077401f3: Status 404 returned error can't find the container with id 4220039c33efb83321a003be7571a3649fc8e65f3d945873306ea0af077401f3
Feb 23 13:01:39.857368 master-0 kubenswrapper[7845]: I0223 13:01:39.857321 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.857368 master-0 kubenswrapper[7845]: I0223 13:01:39.857359 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11074ac-1ee4-447e-883d-b78a5a03176f-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.857368 master-0 kubenswrapper[7845]: I0223 13:01:39.857368 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.857368 master-0 kubenswrapper[7845]: I0223 13:01:39.857378 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2lm6\" (UniqueName: \"kubernetes.io/projected/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-kube-api-access-s2lm6\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.857368 master-0 kubenswrapper[7845]: I0223 13:01:39.857387 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7nrb\" (UniqueName: \"kubernetes.io/projected/d11074ac-1ee4-447e-883d-b78a5a03176f-kube-api-access-z7nrb\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.857645 master-0 kubenswrapper[7845]: I0223 13:01:39.857396 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.857645 master-0 kubenswrapper[7845]: I0223 13:01:39.857405 7845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 23 13:01:39.922580 master-0 kubenswrapper[7845]: I0223 13:01:39.921990 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"]
Feb 23 13:01:39.936896 master-0 kubenswrapper[7845]: W0223 13:01:39.936818 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfbb4d6d_7047_48cb_be03_97a57fc688e3.slice/crio-7eebc0d49b7c567b48cd5eefc8e53ef5d1ed0561b20f604d85eb5c27c39b44c1 WatchSource:0}: Error finding container 7eebc0d49b7c567b48cd5eefc8e53ef5d1ed0561b20f604d85eb5c27c39b44c1: Status 404 returned error can't find the container with id 7eebc0d49b7c567b48cd5eefc8e53ef5d1ed0561b20f604d85eb5c27c39b44c1
Feb 23 13:01:39.949372 master-0 kubenswrapper[7845]: I0223 13:01:39.949326 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"]
Feb 23 13:01:39.949372 master-0 kubenswrapper[7845]: I0223 13:01:39.949379 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"]
Feb 23 13:01:39.958878 master-0 kubenswrapper[7845]: W0223 13:01:39.958837 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3dfb271_a659_45e0_b51d_5e99ec43b555.slice/crio-2559444a55923be36b04d2b835f4fe9aa5657c0c673a3c0e61ca4df7a3e4fa7e WatchSource:0}: Error finding container 2559444a55923be36b04d2b835f4fe9aa5657c0c673a3c0e61ca4df7a3e4fa7e: Status 404 returned error can't find the container with id 2559444a55923be36b04d2b835f4fe9aa5657c0c673a3c0e61ca4df7a3e4fa7e
Feb 23 13:01:39.977278 master-0 kubenswrapper[7845]: W0223 13:01:39.969490 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44b07d33_6e84_434e_9a14_431846620968.slice/crio-f67140661bca80f0082006c33ba58847d3a949b7d72bea750ff23edb65986950 WatchSource:0}: Error finding container f67140661bca80f0082006c33ba58847d3a949b7d72bea750ff23edb65986950: Status 404 returned error can't find the container with id f67140661bca80f0082006c33ba58847d3a949b7d72bea750ff23edb65986950
Feb 23 13:01:40.004481 master-0 kubenswrapper[7845]: I0223 13:01:40.004437 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"]
Feb 23 13:01:40.004481 master-0 kubenswrapper[7845]: I0223 13:01:40.004492 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"]
Feb 23 13:01:40.005739 master-0 kubenswrapper[7845]: I0223 13:01:40.005711 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"]
Feb 23 13:01:40.010742 master-0 kubenswrapper[7845]: I0223 13:01:40.010702 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq2rk"]
Feb 23 13:01:40.011776 master-0 kubenswrapper[7845]: I0223 13:01:40.011741 7845 kubelet.go:2428] "SyncLoop UPDATE"
source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-gswst"] Feb 23 13:01:40.019834 master-0 kubenswrapper[7845]: W0223 13:01:40.019790 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a406f63_eeeb_4da3_a1d0_86b5ab5d802c.slice/crio-2aa19e4d5644a53e8e4d1cac2c7eaac4c6b6bb82c8eb4f73291e6662560a35fe WatchSource:0}: Error finding container 2aa19e4d5644a53e8e4d1cac2c7eaac4c6b6bb82c8eb4f73291e6662560a35fe: Status 404 returned error can't find the container with id 2aa19e4d5644a53e8e4d1cac2c7eaac4c6b6bb82c8eb4f73291e6662560a35fe Feb 23 13:01:40.020647 master-0 kubenswrapper[7845]: W0223 13:01:40.020614 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee436961_c305_4c84_b4f9_175e1d8004fb.slice/crio-18938fa68af909af787dbe379ca80b17c407618308de01749e7e7cd98cd799e3 WatchSource:0}: Error finding container 18938fa68af909af787dbe379ca80b17c407618308de01749e7e7cd98cd799e3: Status 404 returned error can't find the container with id 18938fa68af909af787dbe379ca80b17c407618308de01749e7e7cd98cd799e3 Feb 23 13:01:40.034928 master-0 kubenswrapper[7845]: W0223 13:01:40.034488 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7fbab55_8405_44f4_ae2a_412c115ce411.slice/crio-0f9f46b3a67457561213f46c0dde489fd5b7ad386b82e3ac02c2cf683cbbb34b WatchSource:0}: Error finding container 0f9f46b3a67457561213f46c0dde489fd5b7ad386b82e3ac02c2cf683cbbb34b: Status 404 returned error can't find the container with id 0f9f46b3a67457561213f46c0dde489fd5b7ad386b82e3ac02c2cf683cbbb34b Feb 23 13:01:40.038900 master-0 kubenswrapper[7845]: W0223 13:01:40.038869 7845 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd03d6e_4c8c_400a_8001_343aaeeca93b.slice/crio-5011e8950499afd85717ca70ff2f77337ae409cf405b4306b6e9ccdd5b46be9c WatchSource:0}: Error finding container 5011e8950499afd85717ca70ff2f77337ae409cf405b4306b6e9ccdd5b46be9c: Status 404 returned error can't find the container with id 5011e8950499afd85717ca70ff2f77337ae409cf405b4306b6e9ccdd5b46be9c Feb 23 13:01:40.059151 master-0 kubenswrapper[7845]: I0223 13:01:40.059110 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca\") pod \"controller-manager-557bf46fb-8ljrl\" (UID: \"d11074ac-1ee4-447e-883d-b78a5a03176f\") " pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:40.059308 master-0 kubenswrapper[7845]: E0223 13:01:40.059262 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:40.059365 master-0 kubenswrapper[7845]: E0223 13:01:40.059320 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca podName:d11074ac-1ee4-447e-883d-b78a5a03176f nodeName:}" failed. No retries permitted until 2026-02-23 13:01:56.059303747 +0000 UTC m=+50.055034618 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca") pod "controller-manager-557bf46fb-8ljrl" (UID: "d11074ac-1ee4-447e-883d-b78a5a03176f") : configmap "client-ca" not found Feb 23 13:01:40.182774 master-0 kubenswrapper[7845]: I0223 13:01:40.182737 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:01:40.540703 master-0 kubenswrapper[7845]: I0223 13:01:40.540654 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" event={"ID":"dcd03d6e-4c8c-400a-8001-343aaeeca93b","Type":"ContainerStarted","Data":"5011e8950499afd85717ca70ff2f77337ae409cf405b4306b6e9ccdd5b46be9c"} Feb 23 13:01:40.542307 master-0 kubenswrapper[7845]: I0223 13:01:40.542283 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" event={"ID":"a3dfb271-a659-45e0-b51d-5e99ec43b555","Type":"ContainerStarted","Data":"2559444a55923be36b04d2b835f4fe9aa5657c0c673a3c0e61ca4df7a3e4fa7e"} Feb 23 13:01:40.543307 master-0 kubenswrapper[7845]: I0223 13:01:40.543285 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" event={"ID":"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c","Type":"ContainerStarted","Data":"2aa19e4d5644a53e8e4d1cac2c7eaac4c6b6bb82c8eb4f73291e6662560a35fe"} Feb 23 13:01:40.544922 master-0 kubenswrapper[7845]: I0223 13:01:40.544893 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" event={"ID":"1d953c37-1b74-4ce5-89cb-b3f53454fc57","Type":"ContainerStarted","Data":"1e0c3eebcdc0a49021edd14002068e329a47b402595863d157041ee099c56c4c"} Feb 23 13:01:40.546113 master-0 kubenswrapper[7845]: I0223 13:01:40.546084 7845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" event={"ID":"44b07d33-6e84-434e-9a14-431846620968","Type":"ContainerStarted","Data":"f67140661bca80f0082006c33ba58847d3a949b7d72bea750ff23edb65986950"} Feb 23 13:01:40.548191 master-0 kubenswrapper[7845]: I0223 13:01:40.548161 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" event={"ID":"bfbb4d6d-7047-48cb-be03-97a57fc688e3","Type":"ContainerStarted","Data":"b8216c6629595ae79e53d792a20a769b60a06e1e5c09e5dc292d86cb2730407e"} Feb 23 13:01:40.548191 master-0 kubenswrapper[7845]: I0223 13:01:40.548186 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" event={"ID":"bfbb4d6d-7047-48cb-be03-97a57fc688e3","Type":"ContainerStarted","Data":"03efc28194de33d2ec07ae6162b6263cb9291732ea47b0d97c7caffde4cb8bb2"} Feb 23 13:01:40.548268 master-0 kubenswrapper[7845]: I0223 13:01:40.548196 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" event={"ID":"bfbb4d6d-7047-48cb-be03-97a57fc688e3","Type":"ContainerStarted","Data":"7eebc0d49b7c567b48cd5eefc8e53ef5d1ed0561b20f604d85eb5c27c39b44c1"} Feb 23 13:01:40.549096 master-0 kubenswrapper[7845]: I0223 13:01:40.549067 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:01:40.551373 master-0 kubenswrapper[7845]: I0223 13:01:40.551351 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" event={"ID":"da5d5997-e45f-4858-a9a9-e880bc222caf","Type":"ContainerStarted","Data":"e6875ea1b4393f9f8786542e9f5187cabcb76208aa9dd29aac9cb6595992a918"} Feb 23 13:01:40.551503 master-0 kubenswrapper[7845]: I0223 13:01:40.551375 7845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" event={"ID":"da5d5997-e45f-4858-a9a9-e880bc222caf","Type":"ContainerStarted","Data":"a8422896f1ec2ab46d73c67a22baefed99a0b0d0ea311d5d1f05da3156542ea9"} Feb 23 13:01:40.553280 master-0 kubenswrapper[7845]: I0223 13:01:40.553256 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" event={"ID":"08577c3c-73d8-47f4-ba30-aec11af51d40","Type":"ContainerStarted","Data":"4220039c33efb83321a003be7571a3649fc8e65f3d945873306ea0af077401f3"} Feb 23 13:01:40.554558 master-0 kubenswrapper[7845]: I0223 13:01:40.554532 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" event={"ID":"cbcca259-0dbf-48ca-bf90-eec638dcdd10","Type":"ContainerStarted","Data":"ae5797327ba541f955d9212090aad83a203cfcaad025e64f727a371889902b1b"} Feb 23 13:01:40.556236 master-0 kubenswrapper[7845]: I0223 13:01:40.556216 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" event={"ID":"ee436961-c305-4c84-b4f9-175e1d8004fb","Type":"ContainerStarted","Data":"18938fa68af909af787dbe379ca80b17c407618308de01749e7e7cd98cd799e3"} Feb 23 13:01:40.560043 master-0 kubenswrapper[7845]: I0223 13:01:40.560024 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-557bf46fb-8ljrl" Feb 23 13:01:40.560416 master-0 kubenswrapper[7845]: I0223 13:01:40.560397 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2rk" event={"ID":"e7fbab55-8405-44f4-ae2a-412c115ce411","Type":"ContainerStarted","Data":"0f9f46b3a67457561213f46c0dde489fd5b7ad386b82e3ac02c2cf683cbbb34b"} Feb 23 13:01:40.560472 master-0 kubenswrapper[7845]: I0223 13:01:40.560434 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc" Feb 23 13:01:40.568702 master-0 kubenswrapper[7845]: I0223 13:01:40.568649 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" podStartSLOduration=5.568626011 podStartE2EDuration="5.568626011s" podCreationTimestamp="2026-02-23 13:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:40.565723823 +0000 UTC m=+34.561454694" watchObservedRunningTime="2026-02-23 13:01:40.568626011 +0000 UTC m=+34.564356882" Feb 23 13:01:40.597331 master-0 kubenswrapper[7845]: I0223 13:01:40.597045 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-557bf46fb-8ljrl"] Feb 23 13:01:40.604436 master-0 kubenswrapper[7845]: I0223 13:01:40.604029 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"] Feb 23 13:01:40.604824 master-0 kubenswrapper[7845]: I0223 13:01:40.604803 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-557bf46fb-8ljrl"] Feb 23 13:01:40.604921 master-0 kubenswrapper[7845]: I0223 13:01:40.604898 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.606981 master-0 kubenswrapper[7845]: I0223 13:01:40.606870 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:01:40.607118 master-0 kubenswrapper[7845]: I0223 13:01:40.607091 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:01:40.607583 master-0 kubenswrapper[7845]: I0223 13:01:40.607202 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:01:40.607583 master-0 kubenswrapper[7845]: I0223 13:01:40.607363 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:01:40.607583 master-0 kubenswrapper[7845]: I0223 13:01:40.607557 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:01:40.610099 master-0 kubenswrapper[7845]: I0223 13:01:40.610068 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"] Feb 23 13:01:40.656270 master-0 kubenswrapper[7845]: I0223 13:01:40.656213 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:01:40.665500 master-0 kubenswrapper[7845]: I0223 13:01:40.665468 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"] Feb 23 13:01:40.670066 master-0 kubenswrapper[7845]: I0223 13:01:40.670019 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966944567-cqfvc"] Feb 23 13:01:40.767957 master-0 kubenswrapper[7845]: I0223 13:01:40.766839 7845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-config\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.767957 master-0 kubenswrapper[7845]: I0223 13:01:40.766883 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.767957 master-0 kubenswrapper[7845]: I0223 13:01:40.766941 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f2gm\" (UniqueName: \"kubernetes.io/projected/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-kube-api-access-8f2gm\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.767957 master-0 kubenswrapper[7845]: I0223 13:01:40.766970 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-proxy-ca-bundles\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.767957 master-0 kubenswrapper[7845]: I0223 13:01:40.766997 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-serving-cert\") pod 
\"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.767957 master-0 kubenswrapper[7845]: I0223 13:01:40.767032 7845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11074ac-1ee4-447e-883d-b78a5a03176f-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:40.868507 master-0 kubenswrapper[7845]: I0223 13:01:40.868422 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-serving-cert\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.868507 master-0 kubenswrapper[7845]: I0223 13:01:40.868474 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-config\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.868747 master-0 kubenswrapper[7845]: I0223 13:01:40.868702 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.868896 master-0 kubenswrapper[7845]: I0223 13:01:40.868869 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f2gm\" (UniqueName: \"kubernetes.io/projected/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-kube-api-access-8f2gm\") pod 
\"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.869066 master-0 kubenswrapper[7845]: E0223 13:01:40.869040 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:40.869118 master-0 kubenswrapper[7845]: E0223 13:01:40.869103 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca podName:fa598633-68d2-48e5-9e8c-fdbbb1fb54d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:41.369083506 +0000 UTC m=+35.364814377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca") pod "controller-manager-7cc4b4775-6vdrk" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7") : configmap "client-ca" not found Feb 23 13:01:40.869375 master-0 kubenswrapper[7845]: I0223 13:01:40.869349 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-proxy-ca-bundles\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.869418 master-0 kubenswrapper[7845]: I0223 13:01:40.869390 7845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5f614-bdb1-411b-9578-6c28bdeddfbf-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:01:40.870045 master-0 kubenswrapper[7845]: I0223 13:01:40.870006 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-config\") pod 
\"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.874699 master-0 kubenswrapper[7845]: I0223 13:01:40.874401 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-proxy-ca-bundles\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.891180 master-0 kubenswrapper[7845]: I0223 13:01:40.891153 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f2gm\" (UniqueName: \"kubernetes.io/projected/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-kube-api-access-8f2gm\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:40.891340 master-0 kubenswrapper[7845]: I0223 13:01:40.891301 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-serving-cert\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:41.374190 master-0 kubenswrapper[7845]: I0223 13:01:41.374143 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:41.374425 master-0 kubenswrapper[7845]: E0223 13:01:41.374331 7845 configmap.go:193] Couldn't get configMap 
openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:41.374467 master-0 kubenswrapper[7845]: E0223 13:01:41.374424 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca podName:fa598633-68d2-48e5-9e8c-fdbbb1fb54d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:42.374400539 +0000 UTC m=+36.370131410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca") pod "controller-manager-7cc4b4775-6vdrk" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7") : configmap "client-ca" not found Feb 23 13:01:42.210415 master-0 kubenswrapper[7845]: I0223 13:01:42.210364 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff5f614-bdb1-411b-9578-6c28bdeddfbf" path="/var/lib/kubelet/pods/9ff5f614-bdb1-411b-9578-6c28bdeddfbf/volumes" Feb 23 13:01:42.211182 master-0 kubenswrapper[7845]: I0223 13:01:42.210729 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11074ac-1ee4-447e-883d-b78a5a03176f" path="/var/lib/kubelet/pods/d11074ac-1ee4-447e-883d-b78a5a03176f/volumes" Feb 23 13:01:42.383075 master-0 kubenswrapper[7845]: I0223 13:01:42.383028 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:01:42.383310 master-0 kubenswrapper[7845]: E0223 13:01:42.383150 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 23 13:01:42.383310 master-0 kubenswrapper[7845]: E0223 13:01:42.383206 7845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca podName:fa598633-68d2-48e5-9e8c-fdbbb1fb54d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:44.383190008 +0000 UTC m=+38.378920879 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca") pod "controller-manager-7cc4b4775-6vdrk" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7") : configmap "client-ca" not found Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.163730 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"] Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.164368 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.167264 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.167309 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.167455 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.167700 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.167790 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.167959 7845 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 13:01:43.171373 master-0 kubenswrapper[7845]: I0223 13:01:43.169216 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 23 13:01:43.172705 master-0 kubenswrapper[7845]: I0223 13:01:43.171736 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 23 13:01:43.177072 master-0 kubenswrapper[7845]: I0223 13:01:43.177029 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"]
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.198298 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"]
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.198864 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.201875 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlpqn\" (UniqueName: \"kubernetes.io/projected/c0520301-1a6b-49ca-acca-011692d5b784-kube-api-access-xlpqn\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.201912 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-audit-policies\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.201933 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-etcd-client\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.201965 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-serving-cert\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.202010 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-etcd-serving-ca\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.202027 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-trusted-ca-bundle\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.202053 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-encryption-config\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.204452 master-0 kubenswrapper[7845]: I0223 13:01:43.202071 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0520301-1a6b-49ca-acca-011692d5b784-audit-dir\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.208633 master-0 kubenswrapper[7845]: I0223 13:01:43.208594 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:01:43.208880 master-0 kubenswrapper[7845]: I0223 13:01:43.208860 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 13:01:43.209009 master-0 kubenswrapper[7845]: I0223 13:01:43.208985 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 13:01:43.209112 master-0 kubenswrapper[7845]: I0223 13:01:43.209095 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 13:01:43.209152 master-0 kubenswrapper[7845]: I0223 13:01:43.208600 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:01:43.213598 master-0 kubenswrapper[7845]: I0223 13:01:43.213546 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"]
Feb 23 13:01:43.304638 master-0 kubenswrapper[7845]: I0223 13:01:43.304585 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.304862 master-0 kubenswrapper[7845]: I0223 13:01:43.304660 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-etcd-serving-ca\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.304862 master-0 kubenswrapper[7845]: I0223 13:01:43.304691 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-trusted-ca-bundle\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.304862 master-0 kubenswrapper[7845]: I0223 13:01:43.304718 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-encryption-config\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.304862 master-0 kubenswrapper[7845]: I0223 13:01:43.304745 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0520301-1a6b-49ca-acca-011692d5b784-audit-dir\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.304862 master-0 kubenswrapper[7845]: I0223 13:01:43.304837 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0520301-1a6b-49ca-acca-011692d5b784-audit-dir\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.305478 master-0 kubenswrapper[7845]: I0223 13:01:43.305444 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-etcd-serving-ca\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.305549 master-0 kubenswrapper[7845]: I0223 13:01:43.305488 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-trusted-ca-bundle\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.305549 master-0 kubenswrapper[7845]: I0223 13:01:43.304777 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-config\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.305631 master-0 kubenswrapper[7845]: I0223 13:01:43.305589 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlpqn\" (UniqueName: \"kubernetes.io/projected/c0520301-1a6b-49ca-acca-011692d5b784-kube-api-access-xlpqn\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.306059 master-0 kubenswrapper[7845]: I0223 13:01:43.305619 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-audit-policies\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.306059 master-0 kubenswrapper[7845]: I0223 13:01:43.306043 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-etcd-client\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.306169 master-0 kubenswrapper[7845]: I0223 13:01:43.306138 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-audit-policies\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.306498 master-0 kubenswrapper[7845]: I0223 13:01:43.306463 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frbw\" (UniqueName: \"kubernetes.io/projected/a91c01d9-2bc7-4534-9634-52b841ce3e0c-kube-api-access-9frbw\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.306573 master-0 kubenswrapper[7845]: I0223 13:01:43.306528 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91c01d9-2bc7-4534-9634-52b841ce3e0c-serving-cert\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.306619 master-0 kubenswrapper[7845]: I0223 13:01:43.306570 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-serving-cert\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.311324 master-0 kubenswrapper[7845]: I0223 13:01:43.309393 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-etcd-client\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.311324 master-0 kubenswrapper[7845]: I0223 13:01:43.311237 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-serving-cert\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.321075 master-0 kubenswrapper[7845]: I0223 13:01:43.321039 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlpqn\" (UniqueName: \"kubernetes.io/projected/c0520301-1a6b-49ca-acca-011692d5b784-kube-api-access-xlpqn\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.327088 master-0 kubenswrapper[7845]: I0223 13:01:43.327054 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-encryption-config\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.408535 master-0 kubenswrapper[7845]: I0223 13:01:43.408469 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.408701 master-0 kubenswrapper[7845]: I0223 13:01:43.408612 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-config\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.408701 master-0 kubenswrapper[7845]: I0223 13:01:43.408679 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frbw\" (UniqueName: \"kubernetes.io/projected/a91c01d9-2bc7-4534-9634-52b841ce3e0c-kube-api-access-9frbw\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.409003 master-0 kubenswrapper[7845]: I0223 13:01:43.408959 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91c01d9-2bc7-4534-9634-52b841ce3e0c-serving-cert\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.410030 master-0 kubenswrapper[7845]: E0223 13:01:43.409955 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:43.410202 master-0 kubenswrapper[7845]: E0223 13:01:43.410110 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca podName:a91c01d9-2bc7-4534-9634-52b841ce3e0c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:43.910054961 +0000 UTC m=+37.905785832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca") pod "route-controller-manager-859cf5fcc7-lmnw2" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c") : configmap "client-ca" not found
Feb 23 13:01:43.410470 master-0 kubenswrapper[7845]: I0223 13:01:43.410437 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-config\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.430888 master-0 kubenswrapper[7845]: I0223 13:01:43.430790 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91c01d9-2bc7-4534-9634-52b841ce3e0c-serving-cert\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.432738 master-0 kubenswrapper[7845]: I0223 13:01:43.432696 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frbw\" (UniqueName: \"kubernetes.io/projected/a91c01d9-2bc7-4534-9634-52b841ce3e0c-kube-api-access-9frbw\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.506811 master-0 kubenswrapper[7845]: I0223 13:01:43.506744 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:01:43.915795 master-0 kubenswrapper[7845]: I0223 13:01:43.915709 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:43.916046 master-0 kubenswrapper[7845]: E0223 13:01:43.915935 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:43.916098 master-0 kubenswrapper[7845]: E0223 13:01:43.916053 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca podName:a91c01d9-2bc7-4534-9634-52b841ce3e0c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:44.916026994 +0000 UTC m=+38.911757875 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca") pod "route-controller-manager-859cf5fcc7-lmnw2" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c") : configmap "client-ca" not found
Feb 23 13:01:44.421877 master-0 kubenswrapper[7845]: I0223 13:01:44.421783 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"
Feb 23 13:01:44.422769 master-0 kubenswrapper[7845]: E0223 13:01:44.421907 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:44.422769 master-0 kubenswrapper[7845]: E0223 13:01:44.421965 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca podName:fa598633-68d2-48e5-9e8c-fdbbb1fb54d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:48.421948235 +0000 UTC m=+42.417679106 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca") pod "controller-manager-7cc4b4775-6vdrk" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7") : configmap "client-ca" not found
Feb 23 13:01:44.926842 master-0 kubenswrapper[7845]: I0223 13:01:44.926774 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:44.927092 master-0 kubenswrapper[7845]: E0223 13:01:44.927059 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:44.927163 master-0 kubenswrapper[7845]: E0223 13:01:44.927139 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca podName:a91c01d9-2bc7-4534-9634-52b841ce3e0c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:46.927118433 +0000 UTC m=+40.922849314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca") pod "route-controller-manager-859cf5fcc7-lmnw2" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c") : configmap "client-ca" not found
Feb 23 13:01:45.470444 master-0 kubenswrapper[7845]: I0223 13:01:45.468168 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:01:45.939494 master-0 kubenswrapper[7845]: I0223 13:01:45.939372 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 23 13:01:45.940004 master-0 kubenswrapper[7845]: I0223 13:01:45.939848 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="a6ff6aee-649e-4ee8-9f73-eb3517297706" containerName="installer" containerID="cri-o://f97091b8d61792d1be2f0eb4a50b8a9ee548a1277d9101dba04451e10f5f3331" gracePeriod=30
Feb 23 13:01:46.951712 master-0 kubenswrapper[7845]: I0223 13:01:46.951650 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:46.952486 master-0 kubenswrapper[7845]: E0223 13:01:46.951718 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:46.952486 master-0 kubenswrapper[7845]: E0223 13:01:46.951798 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca podName:a91c01d9-2bc7-4534-9634-52b841ce3e0c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:50.951776846 +0000 UTC m=+44.947507717 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca") pod "route-controller-manager-859cf5fcc7-lmnw2" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c") : configmap "client-ca" not found
Feb 23 13:01:48.330658 master-0 kubenswrapper[7845]: I0223 13:01:48.330595 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 23 13:01:48.331637 master-0 kubenswrapper[7845]: I0223 13:01:48.331614 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.338983 master-0 kubenswrapper[7845]: I0223 13:01:48.338763 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 23 13:01:48.470993 master-0 kubenswrapper[7845]: I0223 13:01:48.470934 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"
Feb 23 13:01:48.471208 master-0 kubenswrapper[7845]: I0223 13:01:48.471024 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.471208 master-0 kubenswrapper[7845]: E0223 13:01:48.471097 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:48.471208 master-0 kubenswrapper[7845]: E0223 13:01:48.471149 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca podName:fa598633-68d2-48e5-9e8c-fdbbb1fb54d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:56.471132326 +0000 UTC m=+50.466863197 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca") pod "controller-manager-7cc4b4775-6vdrk" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7") : configmap "client-ca" not found
Feb 23 13:01:48.471433 master-0 kubenswrapper[7845]: I0223 13:01:48.471371 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kube-api-access\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.471492 master-0 kubenswrapper[7845]: I0223 13:01:48.471465 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-var-lock\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.572635 master-0 kubenswrapper[7845]: I0223 13:01:48.572558 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.572845 master-0 kubenswrapper[7845]: I0223 13:01:48.572690 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.572845 master-0 kubenswrapper[7845]: I0223 13:01:48.572799 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kube-api-access\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.572941 master-0 kubenswrapper[7845]: I0223 13:01:48.572894 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-var-lock\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.573044 master-0 kubenswrapper[7845]: I0223 13:01:48.573008 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-var-lock\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.604880 master-0 kubenswrapper[7845]: I0223 13:01:48.604786 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kube-api-access\") pod \"installer-2-master-0\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:48.715063 master-0 kubenswrapper[7845]: I0223 13:01:48.714985 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:01:49.745848 master-0 kubenswrapper[7845]: I0223 13:01:49.745780 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:01:51.000714 master-0 kubenswrapper[7845]: I0223 13:01:51.000656 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:51.002008 master-0 kubenswrapper[7845]: E0223 13:01:51.000848 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:51.002008 master-0 kubenswrapper[7845]: E0223 13:01:51.000970 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca podName:a91c01d9-2bc7-4534-9634-52b841ce3e0c nodeName:}" failed. No retries permitted until 2026-02-23 13:01:59.000936866 +0000 UTC m=+52.996667817 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca") pod "route-controller-manager-859cf5fcc7-lmnw2" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c") : configmap "client-ca" not found
Feb 23 13:01:52.179904 master-0 kubenswrapper[7845]: I0223 13:01:52.177856 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"]
Feb 23 13:01:52.199261 master-0 kubenswrapper[7845]: I0223 13:01:52.199199 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 23 13:01:52.215439 master-0 kubenswrapper[7845]: W0223 13:01:52.215340 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod81782af1_a026_4c4e_b9b7_6c93eecc8c04.slice/crio-ab47f45771a20b4d43f3495be87b6db9d129d1b8e312eb2a84901852b9ace66c WatchSource:0}: Error finding container ab47f45771a20b4d43f3495be87b6db9d129d1b8e312eb2a84901852b9ace66c: Status 404 returned error can't find the container with id ab47f45771a20b4d43f3495be87b6db9d129d1b8e312eb2a84901852b9ace66c
Feb 23 13:01:52.571941 master-0 kubenswrapper[7845]: I0223 13:01:52.569978 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-75bpf"]
Feb 23 13:01:52.571941 master-0 kubenswrapper[7845]: I0223 13:01:52.571045 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.624798 master-0 kubenswrapper[7845]: I0223 13:01:52.624751 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-modprobe-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.624906 master-0 kubenswrapper[7845]: I0223 13:01:52.624808 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.624906 master-0 kubenswrapper[7845]: I0223 13:01:52.624839 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-run\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.624906 master-0 kubenswrapper[7845]: I0223 13:01:52.624858 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-tuned\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.624906 master-0 kubenswrapper[7845]: I0223 13:01:52.624905 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-kubernetes\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625052 master-0 kubenswrapper[7845]: I0223 13:01:52.624922 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-sys\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625052 master-0 kubenswrapper[7845]: I0223 13:01:52.624939 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-conf\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625052 master-0 kubenswrapper[7845]: I0223 13:01:52.624954 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-var-lib-kubelet\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625052 master-0 kubenswrapper[7845]: I0223 13:01:52.624978 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4jv\" (UniqueName: \"kubernetes.io/projected/34ad2537-b5fe-463f-8e95-f47cc886aa5e-kube-api-access-4r4jv\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625052 master-0 kubenswrapper[7845]: I0223 13:01:52.625011 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-systemd\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625052 master-0 kubenswrapper[7845]: I0223 13:01:52.625040 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-lib-modules\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625222 master-0 kubenswrapper[7845]: I0223 13:01:52.625086 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-host\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625222 master-0 kubenswrapper[7845]: I0223 13:01:52.625116 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-tmp\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.625222 master-0 kubenswrapper[7845]: I0223 13:01:52.625174 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysconfig\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:01:52.648344 master-0 kubenswrapper[7845]: I0223 13:01:52.648294 7845 generic.go:334] "Generic (PLEG): container finished" podID="c159d5f4-5c95-4600-80ec-a17a419cfd7a" containerID="6a3071ee7afe1d84c717a0f5829e74858f0e8791b2e3d45c88b0d153dec1ab43" exitCode=0
Feb 23 13:01:52.648455 master-0 kubenswrapper[7845]: I0223 13:01:52.648366 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" event={"ID":"c159d5f4-5c95-4600-80ec-a17a419cfd7a","Type":"ContainerDied","Data":"6a3071ee7afe1d84c717a0f5829e74858f0e8791b2e3d45c88b0d153dec1ab43"}
Feb 23 13:01:52.651276 master-0 kubenswrapper[7845]: I0223 13:01:52.651038 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" event={"ID":"cbcca259-0dbf-48ca-bf90-eec638dcdd10","Type":"ContainerStarted","Data":"de1f6719f26795e48d57de2183fb7b98a0933566a76aeeb7a85d7bfb172c9eb4"}
Feb 23 13:01:52.651608 master-0 kubenswrapper[7845]: I0223 13:01:52.651579 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:52.653078 master-0 kubenswrapper[7845]: I0223 13:01:52.653033 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" event={"ID":"ee436961-c305-4c84-b4f9-175e1d8004fb","Type":"ContainerStarted","Data":"2b1a45fd6ee377a8067a8087f7d1e5368ed57275c7c5d810570a56331e4cdb31"}
Feb 23 13:01:52.654515 master-0 kubenswrapper[7845]: I0223 13:01:52.654482 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" event={"ID":"44b07d33-6e84-434e-9a14-431846620968","Type":"ContainerStarted","Data":"e430df40036149c49e2ec2bcef759184c22db256e9c6a2afbd7778eeb4659b79"}
Feb 23 13:01:52.656325 master-0 kubenswrapper[7845]: I0223 13:01:52.656062 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:01:52.659613 master-0 kubenswrapper[7845]: I0223 13:01:52.658646 7845
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" event={"ID":"da5d5997-e45f-4858-a9a9-e880bc222caf","Type":"ContainerStarted","Data":"683cdc0fee6b544a3be498a634e1336632426f938865b51d36e3f4e04230192a"} Feb 23 13:01:52.659613 master-0 kubenswrapper[7845]: I0223 13:01:52.658769 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:01:52.663363 master-0 kubenswrapper[7845]: I0223 13:01:52.663322 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" event={"ID":"dcd03d6e-4c8c-400a-8001-343aaeeca93b","Type":"ContainerStarted","Data":"158e1afd791e26b7a7587aaef9543b7419a4a9e119c24e9867350ebe56178a58"} Feb 23 13:01:52.663406 master-0 kubenswrapper[7845]: I0223 13:01:52.663362 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" event={"ID":"dcd03d6e-4c8c-400a-8001-343aaeeca93b","Type":"ContainerStarted","Data":"d573c3e0e8ebb6202d8c5ebe9e0d85b859c5927b89cbdd3a205e10371f242b28"} Feb 23 13:01:52.672265 master-0 kubenswrapper[7845]: I0223 13:01:52.665021 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" event={"ID":"a3dfb271-a659-45e0-b51d-5e99ec43b555","Type":"ContainerStarted","Data":"351e4db24f64009fc4f824529f2660bb02ed2356f12336ec3301a4d672483590"} Feb 23 13:01:52.683425 master-0 kubenswrapper[7845]: I0223 13:01:52.679359 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" event={"ID":"1d953c37-1b74-4ce5-89cb-b3f53454fc57","Type":"ContainerStarted","Data":"611405a04dc23476e0102b383f4f0d51fbb39430cdde420d7a3d20790ecb0a3a"} Feb 23 13:01:52.683425 master-0 kubenswrapper[7845]: I0223 
13:01:52.680220 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:01:52.683425 master-0 kubenswrapper[7845]: I0223 13:01:52.682943 7845 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-28zcz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 23 13:01:52.683425 master-0 kubenswrapper[7845]: I0223 13:01:52.682983 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" podUID="1d953c37-1b74-4ce5-89cb-b3f53454fc57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 23 13:01:52.701462 master-0 kubenswrapper[7845]: I0223 13:01:52.700212 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" event={"ID":"b053c311-07fd-45bb-ab10-6e7b76c9aa48","Type":"ContainerStarted","Data":"e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161"} Feb 23 13:01:52.716499 master-0 kubenswrapper[7845]: I0223 13:01:52.712067 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" event={"ID":"08577c3c-73d8-47f4-ba30-aec11af51d40","Type":"ContainerStarted","Data":"a118c5b6698ca46f589d1cf13fb045dbb329861c2606afc83b4e2c5a5551ab11"} Feb 23 13:01:52.717986 master-0 kubenswrapper[7845]: I0223 13:01:52.717577 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" event={"ID":"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c","Type":"ContainerStarted","Data":"49cba424cf2c60e283525bde6160dccd693982c2542843d4d0587d31883af795"} Feb 23 
13:01:52.723396 master-0 kubenswrapper[7845]: I0223 13:01:52.719479 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" event={"ID":"c0520301-1a6b-49ca-acca-011692d5b784","Type":"ContainerStarted","Data":"31830e0362f7a4961ccb5574999c9b322d54b8a46c9d7f20c64fbd33df71f3a4"} Feb 23 13:01:52.742290 master-0 kubenswrapper[7845]: I0223 13:01:52.739039 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysconfig\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.742290 master-0 kubenswrapper[7845]: I0223 13:01:52.740631 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2rk" event={"ID":"e7fbab55-8405-44f4-ae2a-412c115ce411","Type":"ContainerStarted","Data":"9b5ef569a5800a57648b96b10226346110388c5669aeb46725a02e0e10d6bc01"} Feb 23 13:01:52.743376 master-0 kubenswrapper[7845]: I0223 13:01:52.742669 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-modprobe-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.743376 master-0 kubenswrapper[7845]: I0223 13:01:52.742721 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.743376 master-0 kubenswrapper[7845]: I0223 13:01:52.742760 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-run\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.743376 master-0 kubenswrapper[7845]: I0223 13:01:52.743094 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-tuned\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.743376 master-0 kubenswrapper[7845]: I0223 13:01:52.743189 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-kubernetes\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.746374 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-kubernetes\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.746517 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-modprobe-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.746613 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: 
\"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.746620 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-run\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.746685 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-conf\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.746882 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysconfig\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747052 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-sys\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747124 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-var-lib-kubelet\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747203 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-sys\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747208 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-conf\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747329 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-var-lib-kubelet\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747393 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r4jv\" (UniqueName: \"kubernetes.io/projected/34ad2537-b5fe-463f-8e95-f47cc886aa5e-kube-api-access-4r4jv\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747462 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-systemd\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747481 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-lib-modules\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.747668 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-host\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.748025 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-systemd\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.748191 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-lib-modules\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.748274 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-host\") pod \"tuned-75bpf\" 
(UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.751015 master-0 kubenswrapper[7845]: I0223 13:01:52.748692 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-tmp\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.756660 master-0 kubenswrapper[7845]: I0223 13:01:52.756616 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-tmp\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.757498 master-0 kubenswrapper[7845]: I0223 13:01:52.757471 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-tuned\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.767475 master-0 kubenswrapper[7845]: I0223 13:01:52.767438 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"81782af1-a026-4c4e-b9b7-6c93eecc8c04","Type":"ContainerStarted","Data":"45dd4705a999a8e397b9c36c2dd9482e91556aa536c28dc9e2a1340e6b064fe3"} Feb 23 13:01:52.767475 master-0 kubenswrapper[7845]: I0223 13:01:52.767477 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"81782af1-a026-4c4e-b9b7-6c93eecc8c04","Type":"ContainerStarted","Data":"ab47f45771a20b4d43f3495be87b6db9d129d1b8e312eb2a84901852b9ace66c"} Feb 23 13:01:52.788779 master-0 kubenswrapper[7845]: I0223 13:01:52.788741 7845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r4jv\" (UniqueName: \"kubernetes.io/projected/34ad2537-b5fe-463f-8e95-f47cc886aa5e-kube-api-access-4r4jv\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:52.800520 master-0 kubenswrapper[7845]: I0223 13:01:52.800369 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=4.800355549 podStartE2EDuration="4.800355549s" podCreationTimestamp="2026-02-23 13:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:52.800262107 +0000 UTC m=+46.795992978" watchObservedRunningTime="2026-02-23 13:01:52.800355549 +0000 UTC m=+46.796086420" Feb 23 13:01:52.939750 master-0 kubenswrapper[7845]: I0223 13:01:52.939498 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:01:53.017952 master-0 kubenswrapper[7845]: I0223 13:01:53.017901 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rcn5b"] Feb 23 13:01:53.018714 master-0 kubenswrapper[7845]: I0223 13:01:53.018684 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.027893 master-0 kubenswrapper[7845]: I0223 13:01:53.027864 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rcn5b"] Feb 23 13:01:53.039651 master-0 kubenswrapper[7845]: I0223 13:01:53.039610 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 13:01:53.040115 master-0 kubenswrapper[7845]: I0223 13:01:53.039917 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 13:01:53.040195 master-0 kubenswrapper[7845]: I0223 13:01:53.040165 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 13:01:53.040364 master-0 kubenswrapper[7845]: I0223 13:01:53.040344 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 13:01:53.080315 master-0 kubenswrapper[7845]: I0223 13:01:53.076978 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-config-volume\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.080315 master-0 kubenswrapper[7845]: I0223 13:01:53.077037 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8cx9\" (UniqueName: \"kubernetes.io/projected/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-kube-api-access-d8cx9\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.080315 master-0 kubenswrapper[7845]: I0223 13:01:53.077075 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.178009 master-0 kubenswrapper[7845]: I0223 13:01:53.177791 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.178009 master-0 kubenswrapper[7845]: I0223 13:01:53.177859 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-config-volume\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.178009 master-0 kubenswrapper[7845]: I0223 13:01:53.177895 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8cx9\" (UniqueName: \"kubernetes.io/projected/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-kube-api-access-d8cx9\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.178280 master-0 kubenswrapper[7845]: E0223 13:01:53.178119 7845 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 23 13:01:53.178280 master-0 kubenswrapper[7845]: E0223 13:01:53.178155 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls podName:39ae352f-b9e3-4bbc-b59b-9fa92c7bc714 nodeName:}" failed. No retries permitted until 2026-02-23 13:01:53.678142375 +0000 UTC m=+47.673873236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls") pod "dns-default-rcn5b" (UID: "39ae352f-b9e3-4bbc-b59b-9fa92c7bc714") : secret "dns-default-metrics-tls" not found Feb 23 13:01:53.178876 master-0 kubenswrapper[7845]: I0223 13:01:53.178845 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-config-volume\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.203262 master-0 kubenswrapper[7845]: I0223 13:01:53.201449 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8cx9\" (UniqueName: \"kubernetes.io/projected/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-kube-api-access-d8cx9\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.440532 master-0 kubenswrapper[7845]: I0223 13:01:53.439373 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bq97v"] Feb 23 13:01:53.440532 master-0 kubenswrapper[7845]: I0223 13:01:53.440168 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bq97v" Feb 23 13:01:53.490270 master-0 kubenswrapper[7845]: I0223 13:01:53.490190 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbml7\" (UniqueName: \"kubernetes.io/projected/031016de-897e-42bc-9de4-843122f64a75-kube-api-access-sbml7\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v" Feb 23 13:01:53.490270 master-0 kubenswrapper[7845]: I0223 13:01:53.490233 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/031016de-897e-42bc-9de4-843122f64a75-hosts-file\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v" Feb 23 13:01:53.592134 master-0 kubenswrapper[7845]: I0223 13:01:53.591970 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbml7\" (UniqueName: \"kubernetes.io/projected/031016de-897e-42bc-9de4-843122f64a75-kube-api-access-sbml7\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v" Feb 23 13:01:53.592134 master-0 kubenswrapper[7845]: I0223 13:01:53.592019 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/031016de-897e-42bc-9de4-843122f64a75-hosts-file\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v" Feb 23 13:01:53.592391 master-0 kubenswrapper[7845]: I0223 13:01:53.592196 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/031016de-897e-42bc-9de4-843122f64a75-hosts-file\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " 
pod="openshift-dns/node-resolver-bq97v" Feb 23 13:01:53.617815 master-0 kubenswrapper[7845]: I0223 13:01:53.617523 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbml7\" (UniqueName: \"kubernetes.io/projected/031016de-897e-42bc-9de4-843122f64a75-kube-api-access-sbml7\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v" Feb 23 13:01:53.696294 master-0 kubenswrapper[7845]: I0223 13:01:53.696220 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.700060 master-0 kubenswrapper[7845]: I0223 13:01:53.700023 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.732272 master-0 kubenswrapper[7845]: I0223 13:01:53.731678 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rcn5b" Feb 23 13:01:53.757521 master-0 kubenswrapper[7845]: I0223 13:01:53.757441 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bq97v"
Feb 23 13:01:53.791548 master-0 kubenswrapper[7845]: I0223 13:01:53.791507 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" event={"ID":"44b07d33-6e84-434e-9a14-431846620968","Type":"ContainerStarted","Data":"66d7b9b29d7eeeb9236a56c762cde3c1a65c77718df7cdff3b00efe2346c3dc9"}
Feb 23 13:01:53.794439 master-0 kubenswrapper[7845]: I0223 13:01:53.794418 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" event={"ID":"c159d5f4-5c95-4600-80ec-a17a419cfd7a","Type":"ContainerStarted","Data":"640b9e743701a3df59039841b7ffd17770a70e2fac9c95719ff9a123a069dfd5"}
Feb 23 13:01:53.794528 master-0 kubenswrapper[7845]: I0223 13:01:53.794440 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" event={"ID":"c159d5f4-5c95-4600-80ec-a17a419cfd7a","Type":"ContainerStarted","Data":"fae9e85b25816e4b85785ba4d6364bca09c5887d627f24e1e981782aef086928"}
Feb 23 13:01:53.795724 master-0 kubenswrapper[7845]: I0223 13:01:53.795681 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-75bpf" event={"ID":"34ad2537-b5fe-463f-8e95-f47cc886aa5e","Type":"ContainerStarted","Data":"2aa9f7bdb0fb816f035a29f4a4d1116082cdb63c2de7e836a14867eb42892fb1"}
Feb 23 13:01:53.795776 master-0 kubenswrapper[7845]: I0223 13:01:53.795724 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-75bpf" event={"ID":"34ad2537-b5fe-463f-8e95-f47cc886aa5e","Type":"ContainerStarted","Data":"bfac2b9796ec90c809ea45ae3db77ba447372f61cad1fc80f3fc96fc4ec3cf21"}
Feb 23 13:01:53.798106 master-0 kubenswrapper[7845]: I0223 13:01:53.797685 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" event={"ID":"08577c3c-73d8-47f4-ba30-aec11af51d40","Type":"ContainerStarted","Data":"fb5dec491f2c88065afefc1d91f7d52343f0bf8b8b41cbc669cfd2374b4c4730"}
Feb 23 13:01:53.817493 master-0 kubenswrapper[7845]: I0223 13:01:53.815945 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2rk" event={"ID":"e7fbab55-8405-44f4-ae2a-412c115ce411","Type":"ContainerStarted","Data":"83c821b8ced21853e96b2ec61351236ad07719364b41ca4d37b043893c0106d7"}
Feb 23 13:01:53.826665 master-0 kubenswrapper[7845]: I0223 13:01:53.824420 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:01:54.406432 master-0 kubenswrapper[7845]: I0223 13:01:54.405583 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rcn5b"]
Feb 23 13:01:54.860621 master-0 kubenswrapper[7845]: I0223 13:01:54.860570 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bq97v" event={"ID":"031016de-897e-42bc-9de4-843122f64a75","Type":"ContainerStarted","Data":"5fc0c0f7e6489395110a0001c6864c3ed4744e913c12e2dea4418b4d64463c4d"}
Feb 23 13:01:54.860807 master-0 kubenswrapper[7845]: I0223 13:01:54.860641 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bq97v" event={"ID":"031016de-897e-42bc-9de4-843122f64a75","Type":"ContainerStarted","Data":"3f2f8ec2305a812ab189524192ed5bf86a7bba7a6b18ab8873a325d48aca12f0"}
Feb 23 13:01:54.862558 master-0 kubenswrapper[7845]: I0223 13:01:54.862502 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rcn5b" event={"ID":"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714","Type":"ContainerStarted","Data":"e8b057f2132ff258b6f72db6a015d3a5562051b7f885529a6871d5a5d46fff27"}
Feb 23 13:01:56.014274 master-0 kubenswrapper[7845]: I0223 13:01:56.007462 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" podStartSLOduration=7.412432682 podStartE2EDuration="21.007441789s" podCreationTimestamp="2026-02-23 13:01:35 +0000 UTC" firstStartedPulling="2026-02-23 13:01:38.263178832 +0000 UTC m=+32.258909743" lastFinishedPulling="2026-02-23 13:01:51.858187979 +0000 UTC m=+45.853918850" observedRunningTime="2026-02-23 13:01:55.557682969 +0000 UTC m=+49.553413860" watchObservedRunningTime="2026-02-23 13:01:56.007441789 +0000 UTC m=+50.003172660"
Feb 23 13:01:56.268205 master-0 kubenswrapper[7845]: I0223 13:01:56.268036 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 23 13:01:56.269148 master-0 kubenswrapper[7845]: I0223 13:01:56.269102 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.269716 master-0 kubenswrapper[7845]: I0223 13:01:56.269654 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 23 13:01:56.270420 master-0 kubenswrapper[7845]: I0223 13:01:56.270382 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.271705 master-0 kubenswrapper[7845]: I0223 13:01:56.271664 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 23 13:01:56.271949 master-0 kubenswrapper[7845]: I0223 13:01:56.271896 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Feb 23 13:01:56.351793 master-0 kubenswrapper[7845]: I0223 13:01:56.351729 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.351997 master-0 kubenswrapper[7845]: I0223 13:01:56.351841 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-var-lock\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.351997 master-0 kubenswrapper[7845]: I0223 13:01:56.351893 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.351997 master-0 kubenswrapper[7845]: I0223 13:01:56.351976 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.352213 master-0 kubenswrapper[7845]: I0223 13:01:56.352006 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-var-lock\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.352213 master-0 kubenswrapper[7845]: I0223 13:01:56.352144 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.454415 master-0 kubenswrapper[7845]: I0223 13:01:56.454353 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.454665 master-0 kubenswrapper[7845]: I0223 13:01:56.454527 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.454665 master-0 kubenswrapper[7845]: I0223 13:01:56.454600 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-var-lock\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.454665 master-0 kubenswrapper[7845]: I0223 13:01:56.454650 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.454776 master-0 kubenswrapper[7845]: I0223 13:01:56.454670 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-var-lock\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.454808 master-0 kubenswrapper[7845]: I0223 13:01:56.454778 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-var-lock\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.454808 master-0 kubenswrapper[7845]: I0223 13:01:56.454802 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.454865 master-0 kubenswrapper[7845]: I0223 13:01:56.454821 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.455011 master-0 kubenswrapper[7845]: I0223 13:01:56.454957 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-var-lock\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:56.455402 master-0 kubenswrapper[7845]: I0223 13:01:56.455382 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:56.556086 master-0 kubenswrapper[7845]: I0223 13:01:56.555911 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca\") pod \"controller-manager-7cc4b4775-6vdrk\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") " pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"
Feb 23 13:01:56.556323 master-0 kubenswrapper[7845]: E0223 13:01:56.556111 7845 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:56.556323 master-0 kubenswrapper[7845]: E0223 13:01:56.556179 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca podName:fa598633-68d2-48e5-9e8c-fdbbb1fb54d7 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:12.556158872 +0000 UTC m=+66.551889743 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca") pod "controller-manager-7cc4b4775-6vdrk" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7") : configmap "client-ca" not found
Feb 23 13:01:57.613396 master-0 kubenswrapper[7845]: I0223 13:01:57.613294 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 23 13:01:57.620231 master-0 kubenswrapper[7845]: I0223 13:01:57.620164 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 23 13:01:57.946624 master-0 kubenswrapper[7845]: I0223 13:01:57.946138 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:57.946884 master-0 kubenswrapper[7845]: I0223 13:01:57.946720 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: I0223 13:01:58.283450 7845 patch_prober.go:28] interesting pod/apiserver-6dcf85cb46-cmf75 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]log ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]etcd ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/max-in-flight-filter ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/openshift.io-startinformers ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: livez check failed
Feb 23 13:01:58.283645 master-0 kubenswrapper[7845]: I0223 13:01:58.283584 7845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" podUID="c159d5f4-5c95-4600-80ec-a17a419cfd7a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:01:58.300279 master-0 kubenswrapper[7845]: I0223 13:01:58.298264 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-75bpf" podStartSLOduration=6.298222851 podStartE2EDuration="6.298222851s" podCreationTimestamp="2026-02-23 13:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:01:57.654077925 +0000 UTC m=+51.649808796" watchObservedRunningTime="2026-02-23 13:01:58.298222851 +0000 UTC m=+52.293953732"
Feb 23 13:01:58.300279 master-0 kubenswrapper[7845]: I0223 13:01:58.298652 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 23 13:01:58.300279 master-0 kubenswrapper[7845]: I0223 13:01:58.298889 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="81782af1-a026-4c4e-b9b7-6c93eecc8c04" containerName="installer" containerID="cri-o://45dd4705a999a8e397b9c36c2dd9482e91556aa536c28dc9e2a1340e6b064fe3" gracePeriod=30
Feb 23 13:01:58.336406 master-0 kubenswrapper[7845]: I0223 13:01:58.336352 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") " pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:58.342028 master-0 kubenswrapper[7845]: I0223 13:01:58.341974 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:58.389736 master-0 kubenswrapper[7845]: I0223 13:01:58.389650 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:01:58.412309 master-0 kubenswrapper[7845]: I0223 13:01:58.412280 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 23 13:01:58.886149 master-0 kubenswrapper[7845]: I0223 13:01:58.885983 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_81782af1-a026-4c4e-b9b7-6c93eecc8c04/installer/0.log"
Feb 23 13:01:58.886149 master-0 kubenswrapper[7845]: I0223 13:01:58.886037 7845 generic.go:334] "Generic (PLEG): container finished" podID="81782af1-a026-4c4e-b9b7-6c93eecc8c04" containerID="45dd4705a999a8e397b9c36c2dd9482e91556aa536c28dc9e2a1340e6b064fe3" exitCode=1
Feb 23 13:01:58.887042 master-0 kubenswrapper[7845]: I0223 13:01:58.886156 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"81782af1-a026-4c4e-b9b7-6c93eecc8c04","Type":"ContainerDied","Data":"45dd4705a999a8e397b9c36c2dd9482e91556aa536c28dc9e2a1340e6b064fe3"}
Feb 23 13:01:59.005804 master-0 kubenswrapper[7845]: I0223 13:01:59.005737 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca\") pod \"route-controller-manager-859cf5fcc7-lmnw2\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:01:59.006197 master-0 kubenswrapper[7845]: E0223 13:01:59.006124 7845 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 23 13:01:59.006324 master-0 kubenswrapper[7845]: E0223 13:01:59.006293 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca podName:a91c01d9-2bc7-4534-9634-52b841ce3e0c nodeName:}" failed. No retries permitted until 2026-02-23 13:02:15.006226858 +0000 UTC m=+69.001957909 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca") pod "route-controller-manager-859cf5fcc7-lmnw2" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c") : configmap "client-ca" not found
Feb 23 13:01:59.923818 master-0 kubenswrapper[7845]: I0223 13:01:59.921772 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 23 13:01:59.927233 master-0 kubenswrapper[7845]: I0223 13:01:59.924626 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 23 13:01:59.948513 master-0 kubenswrapper[7845]: W0223 13:01:59.948446 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod05bbed42_d2a0_4d6c_a25f_0d75a37dbab0.slice/crio-3d15a93ba101f5328b2e0d71137561810703895a3b87feba2b93ea3506aebbec WatchSource:0}: Error finding container 3d15a93ba101f5328b2e0d71137561810703895a3b87feba2b93ea3506aebbec: Status 404 returned error can't find the container with id 3d15a93ba101f5328b2e0d71137561810703895a3b87feba2b93ea3506aebbec
Feb 23 13:01:59.972812 master-0 kubenswrapper[7845]: I0223 13:01:59.972781 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_81782af1-a026-4c4e-b9b7-6c93eecc8c04/installer/0.log"
Feb 23 13:01:59.972888 master-0 kubenswrapper[7845]: I0223 13:01:59.972867 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:02:00.131166 master-0 kubenswrapper[7845]: I0223 13:02:00.131099 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kubelet-dir\") pod \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") "
Feb 23 13:02:00.131391 master-0 kubenswrapper[7845]: I0223 13:02:00.131195 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-var-lock\") pod \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") "
Feb 23 13:02:00.131391 master-0 kubenswrapper[7845]: I0223 13:02:00.131250 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kube-api-access\") pod \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\" (UID: \"81782af1-a026-4c4e-b9b7-6c93eecc8c04\") "
Feb 23 13:02:00.131391 master-0 kubenswrapper[7845]: I0223 13:02:00.131344 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "81782af1-a026-4c4e-b9b7-6c93eecc8c04" (UID: "81782af1-a026-4c4e-b9b7-6c93eecc8c04"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:00.131489 master-0 kubenswrapper[7845]: I0223 13:02:00.131360 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-var-lock" (OuterVolumeSpecName: "var-lock") pod "81782af1-a026-4c4e-b9b7-6c93eecc8c04" (UID: "81782af1-a026-4c4e-b9b7-6c93eecc8c04"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:00.131526 master-0 kubenswrapper[7845]: I0223 13:02:00.131500 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:00.131526 master-0 kubenswrapper[7845]: I0223 13:02:00.131514 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:00.135836 master-0 kubenswrapper[7845]: I0223 13:02:00.135810 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "81782af1-a026-4c4e-b9b7-6c93eecc8c04" (UID: "81782af1-a026-4c4e-b9b7-6c93eecc8c04"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:02:00.235802 master-0 kubenswrapper[7845]: I0223 13:02:00.235758 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81782af1-a026-4c4e-b9b7-6c93eecc8c04-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:00.348644 master-0 kubenswrapper[7845]: I0223 13:02:00.347582 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 23 13:02:00.348644 master-0 kubenswrapper[7845]: E0223 13:02:00.347828 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81782af1-a026-4c4e-b9b7-6c93eecc8c04" containerName="installer"
Feb 23 13:02:00.348644 master-0 kubenswrapper[7845]: I0223 13:02:00.347840 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="81782af1-a026-4c4e-b9b7-6c93eecc8c04" containerName="installer"
Feb 23 13:02:00.348644 master-0 kubenswrapper[7845]: I0223 13:02:00.347916 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="81782af1-a026-4c4e-b9b7-6c93eecc8c04" containerName="installer"
Feb 23 13:02:00.348644 master-0 kubenswrapper[7845]: I0223 13:02:00.348279 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.438877 master-0 kubenswrapper[7845]: I0223 13:02:00.438786 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-var-lock\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.439054 master-0 kubenswrapper[7845]: I0223 13:02:00.438947 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.439144 master-0 kubenswrapper[7845]: I0223 13:02:00.439104 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6381fd-efdb-4a38-956c-e057e695717c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.540992 master-0 kubenswrapper[7845]: I0223 13:02:00.540936 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6381fd-efdb-4a38-956c-e057e695717c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.541288 master-0 kubenswrapper[7845]: I0223 13:02:00.541012 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-var-lock\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.541288 master-0 kubenswrapper[7845]: I0223 13:02:00.541042 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.541288 master-0 kubenswrapper[7845]: I0223 13:02:00.541104 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.541419 master-0 kubenswrapper[7845]: I0223 13:02:00.541316 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-var-lock\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.624883 master-0 kubenswrapper[7845]: I0223 13:02:00.624786 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 23 13:02:00.670434 master-0 kubenswrapper[7845]: I0223 13:02:00.670378 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6381fd-efdb-4a38-956c-e057e695717c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.810200 master-0 kubenswrapper[7845]: I0223 13:02:00.810167 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 23 13:02:00.923341 master-0 kubenswrapper[7845]: I0223 13:02:00.922718 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_81782af1-a026-4c4e-b9b7-6c93eecc8c04/installer/0.log"
Feb 23 13:02:00.923341 master-0 kubenswrapper[7845]: I0223 13:02:00.922879 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 23 13:02:00.923752 master-0 kubenswrapper[7845]: I0223 13:02:00.923454 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"81782af1-a026-4c4e-b9b7-6c93eecc8c04","Type":"ContainerDied","Data":"ab47f45771a20b4d43f3495be87b6db9d129d1b8e312eb2a84901852b9ace66c"}
Feb 23 13:02:00.923752 master-0 kubenswrapper[7845]: I0223 13:02:00.923516 7845 scope.go:117] "RemoveContainer" containerID="45dd4705a999a8e397b9c36c2dd9482e91556aa536c28dc9e2a1340e6b064fe3"
Feb 23 13:02:00.925788 master-0 kubenswrapper[7845]: I0223 13:02:00.925199 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0","Type":"ContainerStarted","Data":"22927b186dd20d4435230884e99b7e79937083b7c678e2250219b649223f7070"}
Feb 23 13:02:00.925788 master-0 kubenswrapper[7845]: I0223 13:02:00.925226 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0","Type":"ContainerStarted","Data":"3d15a93ba101f5328b2e0d71137561810703895a3b87feba2b93ea3506aebbec"}
Feb 23 13:02:00.927554 master-0 kubenswrapper[7845]: I0223 13:02:00.927499 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef","Type":"ContainerStarted","Data":"d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c"}
Feb 23 13:02:00.927554 master-0 kubenswrapper[7845]: I0223 13:02:00.927531 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef","Type":"ContainerStarted","Data":"e51638a9727e021593fede6d0ca2aff58505a6ad0f7e8362eee4ed83b891da4a"}
Feb 23 13:02:00.931057 master-0 kubenswrapper[7845]: I0223 13:02:00.930981 7845 generic.go:334] "Generic (PLEG): container finished" podID="c0520301-1a6b-49ca-acca-011692d5b784" containerID="f52728fcdc20113e5e153a7f773c95297fdf5d76daa1b4959be887f3eec9a44d" exitCode=0
Feb 23 13:02:00.931057 master-0 kubenswrapper[7845]: I0223 13:02:00.931044 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" event={"ID":"c0520301-1a6b-49ca-acca-011692d5b784","Type":"ContainerDied","Data":"f52728fcdc20113e5e153a7f773c95297fdf5d76daa1b4959be887f3eec9a44d"}
Feb 23 13:02:01.296187 master-0 kubenswrapper[7845]: I0223 13:02:01.293794 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bq97v" podStartSLOduration=8.293773726 podStartE2EDuration="8.293773726s" podCreationTimestamp="2026-02-23 13:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:01.021217607 +0000 UTC m=+55.016948488" watchObservedRunningTime="2026-02-23 13:02:01.293773726 +0000 UTC m=+55.289504597"
Feb 23 13:02:01.353791 master-0 kubenswrapper[7845]: I0223 13:02:01.353629 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 23 13:02:01.377743 master-0 kubenswrapper[7845]: I0223 13:02:01.376255 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 23 13:02:01.401695 master-0 kubenswrapper[7845]: I0223 13:02:01.395453 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 23 13:02:01.439626 master-0 kubenswrapper[7845]: I0223 13:02:01.439169 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=5.439150618 podStartE2EDuration="5.439150618s" podCreationTimestamp="2026-02-23 13:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:01.437567702 +0000 UTC m=+55.433298583" watchObservedRunningTime="2026-02-23 13:02:01.439150618 +0000 UTC m=+55.434881489"
Feb 23 13:02:01.516356 master-0 kubenswrapper[7845]: I0223 13:02:01.516059 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"]
Feb 23 13:02:01.516587 master-0 kubenswrapper[7845]: E0223 13:02:01.516437 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" podUID="fa598633-68d2-48e5-9e8c-fdbbb1fb54d7"
Feb 23 13:02:01.576960 master-0 kubenswrapper[7845]: I0223 13:02:01.576827 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"]
Feb 23 13:02:01.578980 master-0 kubenswrapper[7845]: E0223 13:02:01.577166 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2" podUID="a91c01d9-2bc7-4534-9634-52b841ce3e0c"
Feb 23 13:02:01.938574 master-0 kubenswrapper[7845]: I0223 13:02:01.938520 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:02:01.939714 master-0 kubenswrapper[7845]: I0223 13:02:01.939681 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"
Feb 23 13:02:01.950321 master-0 kubenswrapper[7845]: I0223 13:02:01.950282 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"
Feb 23 13:02:01.956919 master-0 kubenswrapper[7845]: I0223 13:02:01.956871 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"
Feb 23 13:02:02.008397 master-0 kubenswrapper[7845]: I0223 13:02:02.008297 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=6.008272324 podStartE2EDuration="6.008272324s" podCreationTimestamp="2026-02-23 13:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:01.998472015 +0000 UTC m=+55.994202886" watchObservedRunningTime="2026-02-23 13:02:02.008272324 +0000 UTC m=+56.004003215"
Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065043 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91c01d9-2bc7-4534-9634-52b841ce3e0c-serving-cert\") pod \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") "
Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065108 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-config\") pod \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") "
Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065146 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-serving-cert\") pod \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") "
Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065162 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-proxy-ca-bundles\") pod \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") "
Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065193 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f2gm\" (UniqueName: \"kubernetes.io/projected/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-kube-api-access-8f2gm\") pod \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\" (UID: \"fa598633-68d2-48e5-9e8c-fdbbb1fb54d7\") "
Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065215 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9frbw\" (UniqueName: \"kubernetes.io/projected/a91c01d9-2bc7-4534-9634-52b841ce3e0c-kube-api-access-9frbw\") pod \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\" (UID: \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") "
Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065304 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-config\") pod \"a91c01d9-2bc7-4534-9634-52b841ce3e0c\" (UID:
\"a91c01d9-2bc7-4534-9634-52b841ce3e0c\") " Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065905 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.065993 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-config" (OuterVolumeSpecName: "config") pod "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:02.067266 master-0 kubenswrapper[7845]: I0223 13:02:02.066440 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-config" (OuterVolumeSpecName: "config") pod "a91c01d9-2bc7-4534-9634-52b841ce3e0c" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:02.069748 master-0 kubenswrapper[7845]: I0223 13:02:02.069707 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:02:02.070060 master-0 kubenswrapper[7845]: I0223 13:02:02.070021 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91c01d9-2bc7-4534-9634-52b841ce3e0c-kube-api-access-9frbw" (OuterVolumeSpecName: "kube-api-access-9frbw") pod "a91c01d9-2bc7-4534-9634-52b841ce3e0c" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c"). InnerVolumeSpecName "kube-api-access-9frbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:02.071210 master-0 kubenswrapper[7845]: I0223 13:02:02.071111 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-kube-api-access-8f2gm" (OuterVolumeSpecName: "kube-api-access-8f2gm") pod "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7" (UID: "fa598633-68d2-48e5-9e8c-fdbbb1fb54d7"). InnerVolumeSpecName "kube-api-access-8f2gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:02.074296 master-0 kubenswrapper[7845]: I0223 13:02:02.072794 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91c01d9-2bc7-4534-9634-52b841ce3e0c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a91c01d9-2bc7-4534-9634-52b841ce3e0c" (UID: "a91c01d9-2bc7-4534-9634-52b841ce3e0c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:02:02.166408 master-0 kubenswrapper[7845]: I0223 13:02:02.166356 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f2gm\" (UniqueName: \"kubernetes.io/projected/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-kube-api-access-8f2gm\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:02.166408 master-0 kubenswrapper[7845]: I0223 13:02:02.166401 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9frbw\" (UniqueName: \"kubernetes.io/projected/a91c01d9-2bc7-4534-9634-52b841ce3e0c-kube-api-access-9frbw\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:02.166408 master-0 kubenswrapper[7845]: I0223 13:02:02.166415 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:02.166648 master-0 kubenswrapper[7845]: I0223 13:02:02.166426 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91c01d9-2bc7-4534-9634-52b841ce3e0c-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:02.166648 master-0 kubenswrapper[7845]: I0223 13:02:02.166439 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:02.166648 master-0 kubenswrapper[7845]: I0223 13:02:02.166450 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:02.166648 master-0 kubenswrapper[7845]: I0223 13:02:02.166461 7845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-proxy-ca-bundles\") on 
node \"master-0\" DevicePath \"\"" Feb 23 13:02:02.221181 master-0 kubenswrapper[7845]: I0223 13:02:02.221065 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81782af1-a026-4c4e-b9b7-6c93eecc8c04" path="/var/lib/kubelet/pods/81782af1-a026-4c4e-b9b7-6c93eecc8c04/volumes" Feb 23 13:02:02.949211 master-0 kubenswrapper[7845]: I0223 13:02:02.949177 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2" Feb 23 13:02:02.949950 master-0 kubenswrapper[7845]: I0223 13:02:02.949749 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"fb6381fd-efdb-4a38-956c-e057e695717c","Type":"ContainerStarted","Data":"033b5b43a8ce5215c53bacbd96dbd0c37432004ba2cd6027cd1d0098dbea988f"} Feb 23 13:02:02.950033 master-0 kubenswrapper[7845]: I0223 13:02:02.950003 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc4b4775-6vdrk" Feb 23 13:02:02.959369 master-0 kubenswrapper[7845]: I0223 13:02:02.959329 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:02:02.967260 master-0 kubenswrapper[7845]: I0223 13:02:02.967215 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:02:02.996360 master-0 kubenswrapper[7845]: I0223 13:02:02.996292 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"] Feb 23 13:02:02.999332 master-0 kubenswrapper[7845]: I0223 13:02:02.999299 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276"] Feb 23 13:02:02.999929 master-0 kubenswrapper[7845]: I0223 13:02:02.999892 7845 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.006331 master-0 kubenswrapper[7845]: I0223 13:02:03.006288 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:02:03.006621 master-0 kubenswrapper[7845]: I0223 13:02:03.006590 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:02:03.006848 master-0 kubenswrapper[7845]: I0223 13:02:03.006818 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 13:02:03.007096 master-0 kubenswrapper[7845]: I0223 13:02:03.007064 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:02:03.007406 master-0 kubenswrapper[7845]: I0223 13:02:03.007373 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:02:03.024347 master-0 kubenswrapper[7845]: I0223 13:02:03.009936 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859cf5fcc7-lmnw2"] Feb 23 13:02:03.024347 master-0 kubenswrapper[7845]: I0223 13:02:03.021870 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276"] Feb 23 13:02:03.081895 master-0 kubenswrapper[7845]: I0223 13:02:03.081796 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wckpk\" (UniqueName: \"kubernetes.io/projected/c5c92f94-4bf1-43d3-8409-e816c8247ad8-kube-api-access-wckpk\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " 
pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.082034 master-0 kubenswrapper[7845]: I0223 13:02:03.081939 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-client-ca\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.082074 master-0 kubenswrapper[7845]: I0223 13:02:03.082030 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-config\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.082102 master-0 kubenswrapper[7845]: I0223 13:02:03.082079 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c92f94-4bf1-43d3-8409-e816c8247ad8-serving-cert\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.092070 master-0 kubenswrapper[7845]: I0223 13:02:03.092026 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"] Feb 23 13:02:03.101853 master-0 kubenswrapper[7845]: I0223 13:02:03.101798 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cc4b4775-6vdrk"] Feb 23 13:02:03.187075 master-0 kubenswrapper[7845]: I0223 13:02:03.187031 7845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-client-ca\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.187305 master-0 kubenswrapper[7845]: I0223 13:02:03.187095 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-config\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.187305 master-0 kubenswrapper[7845]: I0223 13:02:03.187134 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c92f94-4bf1-43d3-8409-e816c8247ad8-serving-cert\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.187305 master-0 kubenswrapper[7845]: I0223 13:02:03.187165 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wckpk\" (UniqueName: \"kubernetes.io/projected/c5c92f94-4bf1-43d3-8409-e816c8247ad8-kube-api-access-wckpk\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.187305 master-0 kubenswrapper[7845]: I0223 13:02:03.187201 7845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91c01d9-2bc7-4534-9634-52b841ce3e0c-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:03.187305 master-0 kubenswrapper[7845]: I0223 13:02:03.187217 7845 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:03.188204 master-0 kubenswrapper[7845]: I0223 13:02:03.188042 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-client-ca\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.189076 master-0 kubenswrapper[7845]: I0223 13:02:03.189049 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-config\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.197290 master-0 kubenswrapper[7845]: I0223 13:02:03.191766 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c92f94-4bf1-43d3-8409-e816c8247ad8-serving-cert\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.227781 master-0 kubenswrapper[7845]: I0223 13:02:03.227740 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckpk\" (UniqueName: \"kubernetes.io/projected/c5c92f94-4bf1-43d3-8409-e816c8247ad8-kube-api-access-wckpk\") pod \"route-controller-manager-bc7b979c6-vb276\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.360715 master-0 kubenswrapper[7845]: 
I0223 13:02:03.360597 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:03.528710 master-0 kubenswrapper[7845]: I0223 13:02:03.528627 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 23 13:02:03.740945 master-0 kubenswrapper[7845]: I0223 13:02:03.740890 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276"] Feb 23 13:02:03.965226 master-0 kubenswrapper[7845]: I0223 13:02:03.965135 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rcn5b" event={"ID":"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714","Type":"ContainerStarted","Data":"0bf9d2fc575890ff6a523525512f078dd3fe88615ceafe6c1f7767d33f223b9d"} Feb 23 13:02:03.965226 master-0 kubenswrapper[7845]: I0223 13:02:03.965228 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rcn5b" event={"ID":"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714","Type":"ContainerStarted","Data":"d7ea9d3d3c92916472b0d913cad106f72dbc967edb568343bd92fff5ffb829ea"} Feb 23 13:02:03.966224 master-0 kubenswrapper[7845]: I0223 13:02:03.965383 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rcn5b" Feb 23 13:02:03.967078 master-0 kubenswrapper[7845]: I0223 13:02:03.967012 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"fb6381fd-efdb-4a38-956c-e057e695717c","Type":"ContainerStarted","Data":"0916663f950d4a26eb9a051e3ef3ec191491347819daffe37ee8c82087aaed05"} Feb 23 13:02:03.973178 master-0 kubenswrapper[7845]: I0223 13:02:03.973065 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" 
event={"ID":"c0520301-1a6b-49ca-acca-011692d5b784","Type":"ContainerStarted","Data":"fbdc35613d7e4d7c94e9d3afa63f445f1ddc0b78094b5bca377d4217af496131"} Feb 23 13:02:03.974285 master-0 kubenswrapper[7845]: I0223 13:02:03.974203 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" event={"ID":"c5c92f94-4bf1-43d3-8409-e816c8247ad8","Type":"ContainerStarted","Data":"66b05e320d8f83354dceb975a24a63b4857505d935ba05a8e6ca9c2b6a0ccca5"} Feb 23 13:02:03.990647 master-0 kubenswrapper[7845]: I0223 13:02:03.990530 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rcn5b" podStartSLOduration=3.820687899 podStartE2EDuration="11.990505737s" podCreationTimestamp="2026-02-23 13:01:52 +0000 UTC" firstStartedPulling="2026-02-23 13:01:54.451902977 +0000 UTC m=+48.447633858" lastFinishedPulling="2026-02-23 13:02:02.621720815 +0000 UTC m=+56.617451696" observedRunningTime="2026-02-23 13:02:03.98856978 +0000 UTC m=+57.984300711" watchObservedRunningTime="2026-02-23 13:02:03.990505737 +0000 UTC m=+57.986236638" Feb 23 13:02:04.013909 master-0 kubenswrapper[7845]: I0223 13:02:04.013522 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=4.013490374 podStartE2EDuration="4.013490374s" podCreationTimestamp="2026-02-23 13:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:04.011220657 +0000 UTC m=+58.006951588" watchObservedRunningTime="2026-02-23 13:02:04.013490374 +0000 UTC m=+58.009221325" Feb 23 13:02:04.212155 master-0 kubenswrapper[7845]: I0223 13:02:04.212084 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91c01d9-2bc7-4534-9634-52b841ce3e0c" path="/var/lib/kubelet/pods/a91c01d9-2bc7-4534-9634-52b841ce3e0c/volumes" Feb 23 
13:02:04.212862 master-0 kubenswrapper[7845]: I0223 13:02:04.212818 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa598633-68d2-48e5-9e8c-fdbbb1fb54d7" path="/var/lib/kubelet/pods/fa598633-68d2-48e5-9e8c-fdbbb1fb54d7/volumes" Feb 23 13:02:04.465305 master-0 kubenswrapper[7845]: I0223 13:02:04.457976 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" podStartSLOduration=13.518013144 podStartE2EDuration="21.457952647s" podCreationTimestamp="2026-02-23 13:01:43 +0000 UTC" firstStartedPulling="2026-02-23 13:01:52.211506931 +0000 UTC m=+46.207237802" lastFinishedPulling="2026-02-23 13:02:00.151446434 +0000 UTC m=+54.147177305" observedRunningTime="2026-02-23 13:02:04.040583112 +0000 UTC m=+58.036314013" watchObservedRunningTime="2026-02-23 13:02:04.457952647 +0000 UTC m=+58.453683528" Feb 23 13:02:04.465305 master-0 kubenswrapper[7845]: I0223 13:02:04.459630 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 23 13:02:04.465305 master-0 kubenswrapper[7845]: I0223 13:02:04.460258 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.468461 master-0 kubenswrapper[7845]: I0223 13:02:04.468410 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:02:04.479761 master-0 kubenswrapper[7845]: I0223 13:02:04.479284 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 23 13:02:04.615811 master-0 kubenswrapper[7845]: I0223 13:02:04.615721 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-var-lock\") pod \"installer-1-master-0\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.616059 master-0 kubenswrapper[7845]: I0223 13:02:04.615917 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.616059 master-0 kubenswrapper[7845]: I0223 13:02:04.616011 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04a14e09-67c1-45e9-af34-bccb2fe3757e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.720640 master-0 kubenswrapper[7845]: I0223 13:02:04.720190 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-var-lock\") pod \"installer-1-master-0\" (UID: 
\"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.720640 master-0 kubenswrapper[7845]: I0223 13:02:04.720482 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.720640 master-0 kubenswrapper[7845]: I0223 13:02:04.720579 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04a14e09-67c1-45e9-af34-bccb2fe3757e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.726953 master-0 kubenswrapper[7845]: I0223 13:02:04.726907 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-var-lock\") pod \"installer-1-master-0\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.727026 master-0 kubenswrapper[7845]: I0223 13:02:04.726968 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.752368 master-0 kubenswrapper[7845]: I0223 13:02:04.752316 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04a14e09-67c1-45e9-af34-bccb2fe3757e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.794764 master-0 kubenswrapper[7845]: I0223 13:02:04.794715 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:04.985436 master-0 kubenswrapper[7845]: I0223 13:02:04.985309 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="fb6381fd-efdb-4a38-956c-e057e695717c" containerName="installer" containerID="cri-o://0916663f950d4a26eb9a051e3ef3ec191491347819daffe37ee8c82087aaed05" gracePeriod=30 Feb 23 13:02:05.208705 master-0 kubenswrapper[7845]: I0223 13:02:05.208492 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl"] Feb 23 13:02:05.209271 master-0 kubenswrapper[7845]: I0223 13:02:05.209254 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.213733 master-0 kubenswrapper[7845]: I0223 13:02:05.213692 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:02:05.213810 master-0 kubenswrapper[7845]: I0223 13:02:05.213779 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:02:05.214265 master-0 kubenswrapper[7845]: I0223 13:02:05.214185 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:02:05.214760 master-0 kubenswrapper[7845]: I0223 13:02:05.214732 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:02:05.221827 master-0 kubenswrapper[7845]: I0223 13:02:05.221725 7845 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 23 13:02:05.230757 master-0 kubenswrapper[7845]: I0223 13:02:05.230675 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl"] Feb 23 13:02:05.230935 master-0 kubenswrapper[7845]: I0223 13:02:05.230799 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:02:05.258817 master-0 kubenswrapper[7845]: I0223 13:02:05.257964 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 23 13:02:05.276482 master-0 kubenswrapper[7845]: W0223 13:02:05.276409 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod04a14e09_67c1_45e9_af34_bccb2fe3757e.slice/crio-c5791c5d88fdddb4fe408255082461994583f6df86d1b6c29e0fb7f97bc9c0ae WatchSource:0}: Error finding container c5791c5d88fdddb4fe408255082461994583f6df86d1b6c29e0fb7f97bc9c0ae: Status 404 returned error can't find the container with id c5791c5d88fdddb4fe408255082461994583f6df86d1b6c29e0fb7f97bc9c0ae Feb 23 13:02:05.340747 master-0 kubenswrapper[7845]: I0223 13:02:05.340662 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swmzp\" (UniqueName: \"kubernetes.io/projected/ee313b25-8572-48dd-bd6e-3e4762428e2b-kube-api-access-swmzp\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.340747 master-0 kubenswrapper[7845]: I0223 13:02:05.340733 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-client-ca\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " 
pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.340991 master-0 kubenswrapper[7845]: I0223 13:02:05.340898 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-config\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.340991 master-0 kubenswrapper[7845]: I0223 13:02:05.340930 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee313b25-8572-48dd-bd6e-3e4762428e2b-serving-cert\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.341257 master-0 kubenswrapper[7845]: I0223 13:02:05.341190 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-proxy-ca-bundles\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.442481 master-0 kubenswrapper[7845]: I0223 13:02:05.442423 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swmzp\" (UniqueName: \"kubernetes.io/projected/ee313b25-8572-48dd-bd6e-3e4762428e2b-kube-api-access-swmzp\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.442674 master-0 kubenswrapper[7845]: I0223 13:02:05.442493 7845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-client-ca\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.442674 master-0 kubenswrapper[7845]: I0223 13:02:05.442599 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-config\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.442860 master-0 kubenswrapper[7845]: I0223 13:02:05.442817 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee313b25-8572-48dd-bd6e-3e4762428e2b-serving-cert\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.442918 master-0 kubenswrapper[7845]: I0223 13:02:05.442884 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-proxy-ca-bundles\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.443634 master-0 kubenswrapper[7845]: I0223 13:02:05.443602 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-client-ca\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 
13:02:05.443839 master-0 kubenswrapper[7845]: I0223 13:02:05.443811 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-config\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.444318 master-0 kubenswrapper[7845]: I0223 13:02:05.444288 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-proxy-ca-bundles\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.446980 master-0 kubenswrapper[7845]: I0223 13:02:05.446949 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee313b25-8572-48dd-bd6e-3e4762428e2b-serving-cert\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.461512 master-0 kubenswrapper[7845]: I0223 13:02:05.461464 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swmzp\" (UniqueName: \"kubernetes.io/projected/ee313b25-8572-48dd-bd6e-3e4762428e2b-kube-api-access-swmzp\") pod \"controller-manager-8457dbd4bb-hmgzl\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.524937 master-0 kubenswrapper[7845]: I0223 13:02:05.524817 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:05.997541 master-0 kubenswrapper[7845]: I0223 13:02:05.997493 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_fb6381fd-efdb-4a38-956c-e057e695717c/installer/0.log" Feb 23 13:02:05.998349 master-0 kubenswrapper[7845]: I0223 13:02:05.997564 7845 generic.go:334] "Generic (PLEG): container finished" podID="fb6381fd-efdb-4a38-956c-e057e695717c" containerID="0916663f950d4a26eb9a051e3ef3ec191491347819daffe37ee8c82087aaed05" exitCode=1 Feb 23 13:02:05.998349 master-0 kubenswrapper[7845]: I0223 13:02:05.997705 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"fb6381fd-efdb-4a38-956c-e057e695717c","Type":"ContainerDied","Data":"0916663f950d4a26eb9a051e3ef3ec191491347819daffe37ee8c82087aaed05"} Feb 23 13:02:06.001678 master-0 kubenswrapper[7845]: I0223 13:02:06.001625 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"04a14e09-67c1-45e9-af34-bccb2fe3757e","Type":"ContainerStarted","Data":"88e0e24f4f045d3a42d1ee4cfb99a951aeace5cf2e7bece4bd5f41827f8965f5"} Feb 23 13:02:06.001678 master-0 kubenswrapper[7845]: I0223 13:02:06.001675 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"04a14e09-67c1-45e9-af34-bccb2fe3757e","Type":"ContainerStarted","Data":"c5791c5d88fdddb4fe408255082461994583f6df86d1b6c29e0fb7f97bc9c0ae"} Feb 23 13:02:06.030321 master-0 kubenswrapper[7845]: I0223 13:02:06.026018 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.02599462 podStartE2EDuration="2.02599462s" podCreationTimestamp="2026-02-23 13:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:06.023915129 +0000 UTC m=+60.019646030" watchObservedRunningTime="2026-02-23 13:02:06.02599462 +0000 UTC m=+60.021725501" Feb 23 13:02:06.179349 master-0 kubenswrapper[7845]: I0223 13:02:06.179314 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_fb6381fd-efdb-4a38-956c-e057e695717c/installer/0.log" Feb 23 13:02:06.179483 master-0 kubenswrapper[7845]: I0223 13:02:06.179415 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:02:06.360921 master-0 kubenswrapper[7845]: I0223 13:02:06.360731 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6381fd-efdb-4a38-956c-e057e695717c-kube-api-access\") pod \"fb6381fd-efdb-4a38-956c-e057e695717c\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " Feb 23 13:02:06.360921 master-0 kubenswrapper[7845]: I0223 13:02:06.360823 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-var-lock\") pod \"fb6381fd-efdb-4a38-956c-e057e695717c\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " Feb 23 13:02:06.360921 master-0 kubenswrapper[7845]: I0223 13:02:06.360848 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-kubelet-dir\") pod \"fb6381fd-efdb-4a38-956c-e057e695717c\" (UID: \"fb6381fd-efdb-4a38-956c-e057e695717c\") " Feb 23 13:02:06.361305 master-0 kubenswrapper[7845]: I0223 13:02:06.361081 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-kubelet-dir" (OuterVolumeSpecName: 
"kubelet-dir") pod "fb6381fd-efdb-4a38-956c-e057e695717c" (UID: "fb6381fd-efdb-4a38-956c-e057e695717c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:06.362555 master-0 kubenswrapper[7845]: I0223 13:02:06.362488 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-var-lock" (OuterVolumeSpecName: "var-lock") pod "fb6381fd-efdb-4a38-956c-e057e695717c" (UID: "fb6381fd-efdb-4a38-956c-e057e695717c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:06.365569 master-0 kubenswrapper[7845]: I0223 13:02:06.365524 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6381fd-efdb-4a38-956c-e057e695717c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb6381fd-efdb-4a38-956c-e057e695717c" (UID: "fb6381fd-efdb-4a38-956c-e057e695717c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:06.462340 master-0 kubenswrapper[7845]: I0223 13:02:06.461852 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:06.462340 master-0 kubenswrapper[7845]: I0223 13:02:06.461886 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb6381fd-efdb-4a38-956c-e057e695717c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:06.462340 master-0 kubenswrapper[7845]: I0223 13:02:06.461897 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6381fd-efdb-4a38-956c-e057e695717c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:06.471581 master-0 kubenswrapper[7845]: I0223 13:02:06.471506 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 23 13:02:06.471878 master-0 kubenswrapper[7845]: E0223 13:02:06.471838 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6381fd-efdb-4a38-956c-e057e695717c" containerName="installer" Feb 23 13:02:06.471878 master-0 kubenswrapper[7845]: I0223 13:02:06.471866 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6381fd-efdb-4a38-956c-e057e695717c" containerName="installer" Feb 23 13:02:06.472551 master-0 kubenswrapper[7845]: I0223 13:02:06.472004 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6381fd-efdb-4a38-956c-e057e695717c" containerName="installer" Feb 23 13:02:06.473159 master-0 kubenswrapper[7845]: I0223 13:02:06.473130 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.482498 master-0 kubenswrapper[7845]: I0223 13:02:06.482446 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 23 13:02:06.586567 master-0 kubenswrapper[7845]: I0223 13:02:06.586420 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl"] Feb 23 13:02:06.592318 master-0 kubenswrapper[7845]: W0223 13:02:06.592256 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee313b25_8572_48dd_bd6e_3e4762428e2b.slice/crio-506edd86c26a70c2db9d7c7abd54d977c1956396107ae5536f86ba3c3e901d3a WatchSource:0}: Error finding container 506edd86c26a70c2db9d7c7abd54d977c1956396107ae5536f86ba3c3e901d3a: Status 404 returned error can't find the container with id 506edd86c26a70c2db9d7c7abd54d977c1956396107ae5536f86ba3c3e901d3a Feb 23 13:02:06.667313 master-0 kubenswrapper[7845]: I0223 13:02:06.667218 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.667588 master-0 kubenswrapper[7845]: I0223 13:02:06.667567 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.667721 master-0 kubenswrapper[7845]: I0223 13:02:06.667705 7845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-var-lock\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.770820 master-0 kubenswrapper[7845]: I0223 13:02:06.770741 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.771038 master-0 kubenswrapper[7845]: I0223 13:02:06.770879 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.771038 master-0 kubenswrapper[7845]: I0223 13:02:06.770970 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-var-lock\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.771219 master-0 kubenswrapper[7845]: I0223 13:02:06.771170 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-var-lock\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.771305 master-0 kubenswrapper[7845]: I0223 13:02:06.771177 7845 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.802831 master-0 kubenswrapper[7845]: I0223 13:02:06.802773 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:06.807545 master-0 kubenswrapper[7845]: I0223 13:02:06.807490 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 23 13:02:07.008182 master-0 kubenswrapper[7845]: I0223 13:02:07.008137 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_fb6381fd-efdb-4a38-956c-e057e695717c/installer/0.log" Feb 23 13:02:07.008775 master-0 kubenswrapper[7845]: I0223 13:02:07.008230 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"fb6381fd-efdb-4a38-956c-e057e695717c","Type":"ContainerDied","Data":"033b5b43a8ce5215c53bacbd96dbd0c37432004ba2cd6027cd1d0098dbea988f"} Feb 23 13:02:07.008775 master-0 kubenswrapper[7845]: I0223 13:02:07.008291 7845 scope.go:117] "RemoveContainer" containerID="0916663f950d4a26eb9a051e3ef3ec191491347819daffe37ee8c82087aaed05" Feb 23 13:02:07.008775 master-0 kubenswrapper[7845]: I0223 13:02:07.008392 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 23 13:02:07.010146 master-0 kubenswrapper[7845]: I0223 13:02:07.009882 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" event={"ID":"c5c92f94-4bf1-43d3-8409-e816c8247ad8","Type":"ContainerStarted","Data":"db22d724efe1109d4fb96815acd5c0809efbc8d1daadfee4aa61045869929218"} Feb 23 13:02:07.010146 master-0 kubenswrapper[7845]: I0223 13:02:07.010105 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:07.011413 master-0 kubenswrapper[7845]: I0223 13:02:07.011374 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" event={"ID":"ee313b25-8572-48dd-bd6e-3e4762428e2b","Type":"ContainerStarted","Data":"506edd86c26a70c2db9d7c7abd54d977c1956396107ae5536f86ba3c3e901d3a"} Feb 23 13:02:07.017750 master-0 kubenswrapper[7845]: I0223 13:02:07.017491 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:07.044840 master-0 kubenswrapper[7845]: I0223 13:02:07.044771 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" podStartSLOduration=3.624901065 podStartE2EDuration="6.044563585s" podCreationTimestamp="2026-02-23 13:02:01 +0000 UTC" firstStartedPulling="2026-02-23 13:02:03.758357028 +0000 UTC m=+57.754087899" lastFinishedPulling="2026-02-23 13:02:06.178019538 +0000 UTC m=+60.173750419" observedRunningTime="2026-02-23 13:02:07.043984618 +0000 UTC m=+61.039715489" watchObservedRunningTime="2026-02-23 13:02:07.044563585 +0000 UTC m=+61.040294456" Feb 23 13:02:07.068465 master-0 kubenswrapper[7845]: I0223 13:02:07.066076 7845 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"] Feb 23 13:02:07.068465 master-0 kubenswrapper[7845]: I0223 13:02:07.066451 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" podUID="b053c311-07fd-45bb-ab10-6e7b76c9aa48" containerName="cluster-version-operator" containerID="cri-o://e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161" gracePeriod=130 Feb 23 13:02:07.108841 master-0 kubenswrapper[7845]: I0223 13:02:07.104319 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 23 13:02:07.113781 master-0 kubenswrapper[7845]: I0223 13:02:07.113732 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 23 13:02:07.241643 master-0 kubenswrapper[7845]: I0223 13:02:07.241601 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 23 13:02:07.249625 master-0 kubenswrapper[7845]: W0223 13:02:07.249585 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podec7e21b6_6a6f_49c4_82bb_27a9eda8385f.slice/crio-940949d43a608aa239a5985760ca7193466535c523d1f46fef2bdab76ca68e6c WatchSource:0}: Error finding container 940949d43a608aa239a5985760ca7193466535c523d1f46fef2bdab76ca68e6c: Status 404 returned error can't find the container with id 940949d43a608aa239a5985760ca7193466535c523d1f46fef2bdab76ca68e6c Feb 23 13:02:07.826883 master-0 kubenswrapper[7845]: I0223 13:02:07.826836 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:02:08.000323 master-0 kubenswrapper[7845]: I0223 13:02:07.999955 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") pod \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " Feb 23 13:02:08.000323 master-0 kubenswrapper[7845]: I0223 13:02:08.000085 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca\") pod \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " Feb 23 13:02:08.000323 master-0 kubenswrapper[7845]: I0223 13:02:08.000128 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access\") pod \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " Feb 23 13:02:08.000323 master-0 kubenswrapper[7845]: I0223 13:02:08.000165 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs\") pod \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " Feb 23 13:02:08.000323 master-0 kubenswrapper[7845]: I0223 13:02:08.000198 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads\") pod \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\" (UID: \"b053c311-07fd-45bb-ab10-6e7b76c9aa48\") " Feb 23 13:02:08.001395 master-0 kubenswrapper[7845]: I0223 
13:02:08.000986 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "b053c311-07fd-45bb-ab10-6e7b76c9aa48" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:08.001395 master-0 kubenswrapper[7845]: I0223 13:02:08.001057 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca" (OuterVolumeSpecName: "service-ca") pod "b053c311-07fd-45bb-ab10-6e7b76c9aa48" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:08.002640 master-0 kubenswrapper[7845]: I0223 13:02:08.002530 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "b053c311-07fd-45bb-ab10-6e7b76c9aa48" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:08.006726 master-0 kubenswrapper[7845]: I0223 13:02:08.006662 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b053c311-07fd-45bb-ab10-6e7b76c9aa48" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:08.008080 master-0 kubenswrapper[7845]: I0223 13:02:08.008037 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b053c311-07fd-45bb-ab10-6e7b76c9aa48" (UID: "b053c311-07fd-45bb-ab10-6e7b76c9aa48"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:02:08.019372 master-0 kubenswrapper[7845]: I0223 13:02:08.019317 7845 generic.go:334] "Generic (PLEG): container finished" podID="b053c311-07fd-45bb-ab10-6e7b76c9aa48" containerID="e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161" exitCode=0 Feb 23 13:02:08.019740 master-0 kubenswrapper[7845]: I0223 13:02:08.019406 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" event={"ID":"b053c311-07fd-45bb-ab10-6e7b76c9aa48","Type":"ContainerDied","Data":"e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161"} Feb 23 13:02:08.019740 master-0 kubenswrapper[7845]: I0223 13:02:08.019441 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" event={"ID":"b053c311-07fd-45bb-ab10-6e7b76c9aa48","Type":"ContainerDied","Data":"fa3167a637f939e5683169cc2e4072a308d730dd71812369b7848e7a51a319c7"} Feb 23 13:02:08.019740 master-0 kubenswrapper[7845]: I0223 13:02:08.019483 7845 scope.go:117] "RemoveContainer" containerID="e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161" Feb 23 13:02:08.019740 master-0 kubenswrapper[7845]: I0223 13:02:08.019633 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7" Feb 23 13:02:08.025972 master-0 kubenswrapper[7845]: I0223 13:02:08.025926 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f","Type":"ContainerStarted","Data":"ba76e1e7d93596a655612bb4e3d3eb65c0e3e3e0156fb78857c022a75d37f493"} Feb 23 13:02:08.025972 master-0 kubenswrapper[7845]: I0223 13:02:08.025969 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f","Type":"ContainerStarted","Data":"940949d43a608aa239a5985760ca7193466535c523d1f46fef2bdab76ca68e6c"} Feb 23 13:02:08.048162 master-0 kubenswrapper[7845]: I0223 13:02:08.048091 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.048073767 podStartE2EDuration="2.048073767s" podCreationTimestamp="2026-02-23 13:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:08.04581016 +0000 UTC m=+62.041541031" watchObservedRunningTime="2026-02-23 13:02:08.048073767 +0000 UTC m=+62.043804638" Feb 23 13:02:08.055059 master-0 kubenswrapper[7845]: I0223 13:02:08.055027 7845 scope.go:117] "RemoveContainer" containerID="e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161" Feb 23 13:02:08.055669 master-0 kubenswrapper[7845]: E0223 13:02:08.055634 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161\": container with ID starting with e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161 not found: ID does not exist" 
containerID="e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161"
Feb 23 13:02:08.055727 master-0 kubenswrapper[7845]: I0223 13:02:08.055682 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161"} err="failed to get container status \"e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161\": rpc error: code = NotFound desc = could not find container \"e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161\": container with ID starting with e76dff128ba1e434726adb4e611ca3a3859cf4456c2ab53fa1a1a44c7a7b5161 not found: ID does not exist"
Feb 23 13:02:08.061598 master-0 kubenswrapper[7845]: I0223 13:02:08.061568 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"]
Feb 23 13:02:08.063200 master-0 kubenswrapper[7845]: I0223 13:02:08.063165 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-lfpt7"]
Feb 23 13:02:08.098442 master-0 kubenswrapper[7845]: I0223 13:02:08.098370 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-57476485-j4p78"]
Feb 23 13:02:08.098651 master-0 kubenswrapper[7845]: E0223 13:02:08.098600 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b053c311-07fd-45bb-ab10-6e7b76c9aa48" containerName="cluster-version-operator"
Feb 23 13:02:08.098651 master-0 kubenswrapper[7845]: I0223 13:02:08.098618 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="b053c311-07fd-45bb-ab10-6e7b76c9aa48" containerName="cluster-version-operator"
Feb 23 13:02:08.098775 master-0 kubenswrapper[7845]: I0223 13:02:08.098743 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="b053c311-07fd-45bb-ab10-6e7b76c9aa48" containerName="cluster-version-operator"
Feb 23 13:02:08.099198 master-0 kubenswrapper[7845]: I0223 13:02:08.099167 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.101882 master-0 kubenswrapper[7845]: I0223 13:02:08.101845 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 23 13:02:08.102211 master-0 kubenswrapper[7845]: I0223 13:02:08.102168 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 23 13:02:08.106652 master-0 kubenswrapper[7845]: I0223 13:02:08.106624 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 23 13:02:08.106831 master-0 kubenswrapper[7845]: I0223 13:02:08.106790 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.107060 master-0 kubenswrapper[7845]: I0223 13:02:08.107026 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-kube-api-access\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.107467 master-0 kubenswrapper[7845]: I0223 13:02:08.107425 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-ssl-certs\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.107530 master-0 kubenswrapper[7845]: I0223 13:02:08.107472 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-serving-cert\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.107530 master-0 kubenswrapper[7845]: I0223 13:02:08.107501 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-service-ca\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.107614 master-0 kubenswrapper[7845]: I0223 13:02:08.107564 7845 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b053c311-07fd-45bb-ab10-6e7b76c9aa48-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:08.107614 master-0 kubenswrapper[7845]: I0223 13:02:08.107582 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b053c311-07fd-45bb-ab10-6e7b76c9aa48-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:08.107614 master-0 kubenswrapper[7845]: I0223 13:02:08.107597 7845 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:08.107614 master-0 kubenswrapper[7845]: I0223 13:02:08.107609 7845 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b053c311-07fd-45bb-ab10-6e7b76c9aa48-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:08.107725 master-0 kubenswrapper[7845]: I0223 13:02:08.107625 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b053c311-07fd-45bb-ab10-6e7b76c9aa48-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:08.208498 master-0 kubenswrapper[7845]: I0223 13:02:08.208415 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.208498 master-0 kubenswrapper[7845]: I0223 13:02:08.208465 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-kube-api-access\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.208855 master-0 kubenswrapper[7845]: I0223 13:02:08.208560 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.208855 master-0 kubenswrapper[7845]: I0223 13:02:08.208602 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-ssl-certs\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.208855 master-0 kubenswrapper[7845]: I0223 13:02:08.208734 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-serving-cert\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.208855 master-0 kubenswrapper[7845]: I0223 13:02:08.208838 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-service-ca\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.209219 master-0 kubenswrapper[7845]: I0223 13:02:08.208920 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-ssl-certs\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.212284 master-0 kubenswrapper[7845]: I0223 13:02:08.212188 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b053c311-07fd-45bb-ab10-6e7b76c9aa48" path="/var/lib/kubelet/pods/b053c311-07fd-45bb-ab10-6e7b76c9aa48/volumes"
Feb 23 13:02:08.213361 master-0 kubenswrapper[7845]: I0223 13:02:08.212706 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6381fd-efdb-4a38-956c-e057e695717c" path="/var/lib/kubelet/pods/fb6381fd-efdb-4a38-956c-e057e695717c/volumes"
Feb 23 13:02:08.213670 master-0 kubenswrapper[7845]: I0223 13:02:08.213532 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-service-ca\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.226534 master-0 kubenswrapper[7845]: I0223 13:02:08.226474 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-serving-cert\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.228173 master-0 kubenswrapper[7845]: I0223 13:02:08.228129 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-kube-api-access\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.442025 master-0 kubenswrapper[7845]: I0223 13:02:08.441941 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:02:08.507129 master-0 kubenswrapper[7845]: I0223 13:02:08.507071 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:02:08.507389 master-0 kubenswrapper[7845]: I0223 13:02:08.507337 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:02:08.515948 master-0 kubenswrapper[7845]: I0223 13:02:08.515897 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:02:09.037907 master-0 kubenswrapper[7845]: I0223 13:02:09.037866 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_a6ff6aee-649e-4ee8-9f73-eb3517297706/installer/0.log"
Feb 23 13:02:09.038806 master-0 kubenswrapper[7845]: I0223 13:02:09.037918 7845 generic.go:334] "Generic (PLEG): container finished" podID="a6ff6aee-649e-4ee8-9f73-eb3517297706" containerID="f97091b8d61792d1be2f0eb4a50b8a9ee548a1277d9101dba04451e10f5f3331" exitCode=1
Feb 23 13:02:09.038806 master-0 kubenswrapper[7845]: I0223 13:02:09.038021 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"a6ff6aee-649e-4ee8-9f73-eb3517297706","Type":"ContainerDied","Data":"f97091b8d61792d1be2f0eb4a50b8a9ee548a1277d9101dba04451e10f5f3331"}
Feb 23 13:02:09.042079 master-0 kubenswrapper[7845]: I0223 13:02:09.041984 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" event={"ID":"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd","Type":"ContainerStarted","Data":"5db2f0540bf3595f6491e89d67843156a4d64e6dce1fba55ec53b1c3ad371af1"}
Feb 23 13:02:09.042079 master-0 kubenswrapper[7845]: I0223 13:02:09.042057 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" event={"ID":"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd","Type":"ContainerStarted","Data":"8b0568f1af714331492afb936eff9364e4e1b161e76a0c02477b4d75a1981323"}
Feb 23 13:02:09.051631 master-0 kubenswrapper[7845]: I0223 13:02:09.051595 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:02:09.066990 master-0 kubenswrapper[7845]: I0223 13:02:09.066867 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" podStartSLOduration=1.066855999 podStartE2EDuration="1.066855999s" podCreationTimestamp="2026-02-23 13:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:09.064718766 +0000 UTC m=+63.060449637" watchObservedRunningTime="2026-02-23 13:02:09.066855999 +0000 UTC m=+63.062586870"
Feb 23 13:02:10.324509 master-0 kubenswrapper[7845]: I0223 13:02:10.324456 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_a6ff6aee-649e-4ee8-9f73-eb3517297706/installer/0.log"
Feb 23 13:02:10.325172 master-0 kubenswrapper[7845]: I0223 13:02:10.324524 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:02:10.348797 master-0 kubenswrapper[7845]: I0223 13:02:10.348663 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-var-lock\") pod \"a6ff6aee-649e-4ee8-9f73-eb3517297706\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") "
Feb 23 13:02:10.348797 master-0 kubenswrapper[7845]: I0223 13:02:10.348766 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6ff6aee-649e-4ee8-9f73-eb3517297706-kube-api-access\") pod \"a6ff6aee-649e-4ee8-9f73-eb3517297706\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") "
Feb 23 13:02:10.349140 master-0 kubenswrapper[7845]: I0223 13:02:10.348828 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-kubelet-dir\") pod \"a6ff6aee-649e-4ee8-9f73-eb3517297706\" (UID: \"a6ff6aee-649e-4ee8-9f73-eb3517297706\") "
Feb 23 13:02:10.349140 master-0 kubenswrapper[7845]: I0223 13:02:10.349039 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a6ff6aee-649e-4ee8-9f73-eb3517297706" (UID: "a6ff6aee-649e-4ee8-9f73-eb3517297706"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:10.349140 master-0 kubenswrapper[7845]: I0223 13:02:10.349083 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-var-lock" (OuterVolumeSpecName: "var-lock") pod "a6ff6aee-649e-4ee8-9f73-eb3517297706" (UID: "a6ff6aee-649e-4ee8-9f73-eb3517297706"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:10.362367 master-0 kubenswrapper[7845]: I0223 13:02:10.359613 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ff6aee-649e-4ee8-9f73-eb3517297706-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a6ff6aee-649e-4ee8-9f73-eb3517297706" (UID: "a6ff6aee-649e-4ee8-9f73-eb3517297706"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:02:10.454911 master-0 kubenswrapper[7845]: I0223 13:02:10.450632 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6ff6aee-649e-4ee8-9f73-eb3517297706-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:10.454911 master-0 kubenswrapper[7845]: I0223 13:02:10.450677 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:10.454911 master-0 kubenswrapper[7845]: I0223 13:02:10.450688 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6ff6aee-649e-4ee8-9f73-eb3517297706-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:11.059276 master-0 kubenswrapper[7845]: I0223 13:02:11.059213 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" event={"ID":"ee313b25-8572-48dd-bd6e-3e4762428e2b","Type":"ContainerStarted","Data":"9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8"}
Feb 23 13:02:11.061270 master-0 kubenswrapper[7845]: I0223 13:02:11.060129 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl"
Feb 23 13:02:11.061270 master-0 kubenswrapper[7845]: I0223 13:02:11.061124 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_a6ff6aee-649e-4ee8-9f73-eb3517297706/installer/0.log"
Feb 23 13:02:11.065265 master-0 kubenswrapper[7845]: I0223 13:02:11.061580 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 23 13:02:11.065265 master-0 kubenswrapper[7845]: I0223 13:02:11.062431 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"a6ff6aee-649e-4ee8-9f73-eb3517297706","Type":"ContainerDied","Data":"4be5c18a6c854aadb8ace6a50f8dda1fa624ebf315d80592a6eb921cac92c0d3"}
Feb 23 13:02:11.065265 master-0 kubenswrapper[7845]: I0223 13:02:11.062599 7845 scope.go:117] "RemoveContainer" containerID="f97091b8d61792d1be2f0eb4a50b8a9ee548a1277d9101dba04451e10f5f3331"
Feb 23 13:02:11.070265 master-0 kubenswrapper[7845]: I0223 13:02:11.066987 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl"
Feb 23 13:02:11.105313 master-0 kubenswrapper[7845]: I0223 13:02:11.104398 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" podStartSLOduration=6.674667647 podStartE2EDuration="10.104375781s" podCreationTimestamp="2026-02-23 13:02:01 +0000 UTC" firstStartedPulling="2026-02-23 13:02:06.59830573 +0000 UTC m=+60.594036601" lastFinishedPulling="2026-02-23 13:02:10.028013834 +0000 UTC m=+64.023744735" observedRunningTime="2026-02-23 13:02:11.102622739 +0000 UTC m=+65.098353620" watchObservedRunningTime="2026-02-23 13:02:11.104375781 +0000 UTC m=+65.100106642"
Feb 23 13:02:11.164282 master-0 kubenswrapper[7845]: I0223 13:02:11.164021 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 23 13:02:11.174906 master-0 kubenswrapper[7845]: I0223 13:02:11.174840 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 23 13:02:12.215537 master-0 kubenswrapper[7845]: I0223 13:02:12.215454 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ff6aee-649e-4ee8-9f73-eb3517297706" path="/var/lib/kubelet/pods/a6ff6aee-649e-4ee8-9f73-eb3517297706/volumes"
Feb 23 13:02:13.086443 master-0 kubenswrapper[7845]: I0223 13:02:13.086383 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 23 13:02:13.086721 master-0 kubenswrapper[7845]: I0223 13:02:13.086659 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" containerName="installer" containerID="cri-o://d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c" gracePeriod=30
Feb 23 13:02:14.739068 master-0 kubenswrapper[7845]: I0223 13:02:14.738992 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rcn5b"
Feb 23 13:02:15.210514 master-0 kubenswrapper[7845]: I0223 13:02:15.210434 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Feb 23 13:02:15.210811 master-0 kubenswrapper[7845]: I0223 13:02:15.210752 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-4-master-0" podUID="ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" containerName="installer" containerID="cri-o://ba76e1e7d93596a655612bb4e3d3eb65c0e3e3e0156fb78857c022a75d37f493" gracePeriod=30
Feb 23 13:02:15.903890 master-0 kubenswrapper[7845]: I0223 13:02:15.903818 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 23 13:02:15.904547 master-0 kubenswrapper[7845]: E0223 13:02:15.904490 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ff6aee-649e-4ee8-9f73-eb3517297706" containerName="installer"
Feb 23 13:02:15.904547 master-0 kubenswrapper[7845]: I0223 13:02:15.904521 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ff6aee-649e-4ee8-9f73-eb3517297706" containerName="installer"
Feb 23 13:02:15.905024 master-0 kubenswrapper[7845]: I0223 13:02:15.904676 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ff6aee-649e-4ee8-9f73-eb3517297706" containerName="installer"
Feb 23 13:02:15.905714 master-0 kubenswrapper[7845]: I0223 13:02:15.905534 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:15.909576 master-0 kubenswrapper[7845]: I0223 13:02:15.909526 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-t58wm"
Feb 23 13:02:15.953134 master-0 kubenswrapper[7845]: I0223 13:02:15.948872 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-var-lock\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:15.953134 master-0 kubenswrapper[7845]: I0223 13:02:15.948983 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:15.953134 master-0 kubenswrapper[7845]: I0223 13:02:15.949019 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:16.050712 master-0 kubenswrapper[7845]: I0223 13:02:16.050638 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:16.050712 master-0 kubenswrapper[7845]: I0223 13:02:16.050703 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:16.051036 master-0 kubenswrapper[7845]: I0223 13:02:16.050765 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-var-lock\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:16.051036 master-0 kubenswrapper[7845]: I0223 13:02:16.050872 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-var-lock\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:16.051036 master-0 kubenswrapper[7845]: I0223 13:02:16.050941 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:16.111425 master-0 kubenswrapper[7845]: I0223 13:02:16.111395 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_ec7e21b6-6a6f-49c4-82bb-27a9eda8385f/installer/0.log"
Feb 23 13:02:16.111596 master-0 kubenswrapper[7845]: I0223 13:02:16.111444 7845 generic.go:334] "Generic (PLEG): container finished" podID="ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" containerID="ba76e1e7d93596a655612bb4e3d3eb65c0e3e3e0156fb78857c022a75d37f493" exitCode=1
Feb 23 13:02:16.111596 master-0 kubenswrapper[7845]: I0223 13:02:16.111485 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f","Type":"ContainerDied","Data":"ba76e1e7d93596a655612bb4e3d3eb65c0e3e3e0156fb78857c022a75d37f493"}
Feb 23 13:02:16.236286 master-0 kubenswrapper[7845]: I0223 13:02:16.230998 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 23 13:02:16.252996 master-0 kubenswrapper[7845]: I0223 13:02:16.252855 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_ec7e21b6-6a6f-49c4-82bb-27a9eda8385f/installer/0.log"
Feb 23 13:02:16.252996 master-0 kubenswrapper[7845]: I0223 13:02:16.252929 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Feb 23 13:02:16.353371 master-0 kubenswrapper[7845]: I0223 13:02:16.353314 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kube-api-access\") pod \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") "
Feb 23 13:02:16.353371 master-0 kubenswrapper[7845]: I0223 13:02:16.353383 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kubelet-dir\") pod \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") "
Feb 23 13:02:16.353769 master-0 kubenswrapper[7845]: I0223 13:02:16.353467 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-var-lock\") pod \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\" (UID: \"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f\") "
Feb 23 13:02:16.353769 master-0 kubenswrapper[7845]: I0223 13:02:16.353662 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" (UID: "ec7e21b6-6a6f-49c4-82bb-27a9eda8385f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:16.354087 master-0 kubenswrapper[7845]: I0223 13:02:16.353975 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-var-lock" (OuterVolumeSpecName: "var-lock") pod "ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" (UID: "ec7e21b6-6a6f-49c4-82bb-27a9eda8385f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:16.354664 master-0 kubenswrapper[7845]: I0223 13:02:16.354268 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:16.356468 master-0 kubenswrapper[7845]: I0223 13:02:16.356405 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" (UID: "ec7e21b6-6a6f-49c4-82bb-27a9eda8385f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:02:16.455883 master-0 kubenswrapper[7845]: I0223 13:02:16.455702 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:16.455883 master-0 kubenswrapper[7845]: I0223 13:02:16.455741 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:16.498214 master-0 kubenswrapper[7845]: I0223 13:02:16.498150 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:16.541899 master-0 kubenswrapper[7845]: I0223 13:02:16.541380 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 23 13:02:17.055384 master-0 kubenswrapper[7845]: I0223 13:02:17.055339 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 23 13:02:17.125396 master-0 kubenswrapper[7845]: I0223 13:02:17.125332 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2d8a9026-ee0a-44c4-9c90-cd863f5461dd","Type":"ContainerStarted","Data":"a88facd6cceb823d7867c66655ebb82fc519bdd5794630121e38248005478c94"}
Feb 23 13:02:17.129962 master-0 kubenswrapper[7845]: I0223 13:02:17.129881 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_ec7e21b6-6a6f-49c4-82bb-27a9eda8385f/installer/0.log"
Feb 23 13:02:17.129962 master-0 kubenswrapper[7845]: I0223 13:02:17.129956 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ec7e21b6-6a6f-49c4-82bb-27a9eda8385f","Type":"ContainerDied","Data":"940949d43a608aa239a5985760ca7193466535c523d1f46fef2bdab76ca68e6c"}
Feb 23 13:02:17.130495 master-0 kubenswrapper[7845]: I0223 13:02:17.129995 7845 scope.go:117] "RemoveContainer" containerID="ba76e1e7d93596a655612bb4e3d3eb65c0e3e3e0156fb78857c022a75d37f493"
Feb 23 13:02:17.130495 master-0 kubenswrapper[7845]: I0223 13:02:17.130366 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Feb 23 13:02:17.178839 master-0 kubenswrapper[7845]: I0223 13:02:17.178639 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Feb 23 13:02:17.182201 master-0 kubenswrapper[7845]: I0223 13:02:17.182163 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Feb 23 13:02:17.201019 master-0 kubenswrapper[7845]: E0223 13:02:17.200970 7845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podec7e21b6_6a6f_49c4_82bb_27a9eda8385f.slice\": RecentStats: unable to find data in memory cache]"
Feb 23 13:02:17.928344 master-0 kubenswrapper[7845]: I0223 13:02:17.928206 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 23 13:02:17.928548 master-0 kubenswrapper[7845]: E0223 13:02:17.928426 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" containerName="installer"
Feb 23 13:02:17.928548 master-0 kubenswrapper[7845]: I0223 13:02:17.928441 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" containerName="installer"
Feb 23 13:02:17.928548 master-0 kubenswrapper[7845]: I0223 13:02:17.928523 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" containerName="installer"
Feb 23 13:02:17.928884 master-0 kubenswrapper[7845]: I0223 13:02:17.928859 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:17.931569 master-0 kubenswrapper[7845]: I0223 13:02:17.931522 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-fk29t"
Feb 23 13:02:17.932415 master-0 kubenswrapper[7845]: I0223 13:02:17.932355 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 23 13:02:17.945079 master-0 kubenswrapper[7845]: I0223 13:02:17.945028 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 23 13:02:17.986289 master-0 kubenswrapper[7845]: I0223 13:02:17.986195 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:17.986504 master-0 kubenswrapper[7845]: I0223 13:02:17.986318 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1860bead-61b8-4678-b583-c13c79575ef4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:17.986504 master-0 kubenswrapper[7845]: I0223 13:02:17.986386 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-var-lock\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:18.087021 master-0 kubenswrapper[7845]: I0223 13:02:18.086964 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-var-lock\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:18.087021 master-0 kubenswrapper[7845]: I0223 13:02:18.087023 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:18.087578 master-0 kubenswrapper[7845]: I0223 13:02:18.087049 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1860bead-61b8-4678-b583-c13c79575ef4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:18.087578 master-0 kubenswrapper[7845]: I0223 13:02:18.087182 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:18.087578 master-0 kubenswrapper[7845]: I0223 13:02:18.087514 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-var-lock\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:18.105436 master-0 kubenswrapper[7845]: I0223 13:02:18.105377 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1860bead-61b8-4678-b583-c13c79575ef4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 23 13:02:18.139984 master-0 kubenswrapper[7845]: I0223 13:02:18.139940 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2d8a9026-ee0a-44c4-9c90-cd863f5461dd","Type":"ContainerStarted","Data":"76debd76d1c83d2501b62235b0e22ba16bdbcca50bf40d8506d768b4e775ec89"}
Feb 23 13:02:18.142056 master-0 kubenswrapper[7845]: I0223 13:02:18.142024 7845 generic.go:334] "Generic (PLEG): container finished" podID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" containerID="fc76a6ebf82c376de367ae9069a978505805d785a26a3e42e6dad2867b699aeb" exitCode=0
Feb 23 13:02:18.142145 master-0 kubenswrapper[7845]: I0223 13:02:18.142065 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" event={"ID":"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30","Type":"ContainerDied","Data":"fc76a6ebf82c376de367ae9069a978505805d785a26a3e42e6dad2867b699aeb"}
Feb 23 13:02:18.142397 master-0 kubenswrapper[7845]: I0223 13:02:18.142374 7845 scope.go:117] "RemoveContainer" containerID="fc76a6ebf82c376de367ae9069a978505805d785a26a3e42e6dad2867b699aeb"
Feb 23 13:02:18.166674 master-0 kubenswrapper[7845]: I0223 13:02:18.166596 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=3.166578183 podStartE2EDuration="3.166578183s" podCreationTimestamp="2026-02-23 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:18.165175751 +0000 UTC m=+72.160906622" watchObservedRunningTime="2026-02-23 13:02:18.166578183 +0000 UTC 
m=+72.162309044" Feb 23 13:02:18.215194 master-0 kubenswrapper[7845]: I0223 13:02:18.215144 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7e21b6-6a6f-49c4-82bb-27a9eda8385f" path="/var/lib/kubelet/pods/ec7e21b6-6a6f-49c4-82bb-27a9eda8385f/volumes" Feb 23 13:02:18.285816 master-0 kubenswrapper[7845]: I0223 13:02:18.285761 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:02:18.600546 master-0 kubenswrapper[7845]: I0223 13:02:18.600422 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl"] Feb 23 13:02:18.600732 master-0 kubenswrapper[7845]: I0223 13:02:18.600668 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" podUID="ee313b25-8572-48dd-bd6e-3e4762428e2b" containerName="controller-manager" containerID="cri-o://9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8" gracePeriod=30 Feb 23 13:02:18.637108 master-0 kubenswrapper[7845]: I0223 13:02:18.636801 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276"] Feb 23 13:02:18.637723 master-0 kubenswrapper[7845]: I0223 13:02:18.637654 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" podUID="c5c92f94-4bf1-43d3-8409-e816c8247ad8" containerName="route-controller-manager" containerID="cri-o://db22d724efe1109d4fb96815acd5c0809efbc8d1daadfee4aa61045869929218" gracePeriod=30 Feb 23 13:02:18.743269 master-0 kubenswrapper[7845]: I0223 13:02:18.733618 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 23 13:02:19.132891 master-0 kubenswrapper[7845]: I0223 13:02:19.132845 7845 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:19.154122 master-0 kubenswrapper[7845]: I0223 13:02:19.154033 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" event={"ID":"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30","Type":"ContainerStarted","Data":"5746b4ef817cfb0913d62f6abec0cfefcc90fea76e17ad5446db2699e58dc8b7"} Feb 23 13:02:19.156730 master-0 kubenswrapper[7845]: I0223 13:02:19.156651 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1860bead-61b8-4678-b583-c13c79575ef4","Type":"ContainerStarted","Data":"d55c80b452ec57080fce8905969e2a9fba190533481c5ba5b0159b45e85104dd"} Feb 23 13:02:19.161163 master-0 kubenswrapper[7845]: I0223 13:02:19.161094 7845 generic.go:334] "Generic (PLEG): container finished" podID="c5c92f94-4bf1-43d3-8409-e816c8247ad8" containerID="db22d724efe1109d4fb96815acd5c0809efbc8d1daadfee4aa61045869929218" exitCode=0 Feb 23 13:02:19.161390 master-0 kubenswrapper[7845]: I0223 13:02:19.161170 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" event={"ID":"c5c92f94-4bf1-43d3-8409-e816c8247ad8","Type":"ContainerDied","Data":"db22d724efe1109d4fb96815acd5c0809efbc8d1daadfee4aa61045869929218"} Feb 23 13:02:19.168821 master-0 kubenswrapper[7845]: I0223 13:02:19.165321 7845 generic.go:334] "Generic (PLEG): container finished" podID="ee313b25-8572-48dd-bd6e-3e4762428e2b" containerID="9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8" exitCode=0 Feb 23 13:02:19.168821 master-0 kubenswrapper[7845]: I0223 13:02:19.165775 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" Feb 23 13:02:19.168821 master-0 kubenswrapper[7845]: I0223 13:02:19.165982 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" event={"ID":"ee313b25-8572-48dd-bd6e-3e4762428e2b","Type":"ContainerDied","Data":"9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8"} Feb 23 13:02:19.168821 master-0 kubenswrapper[7845]: I0223 13:02:19.166010 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl" event={"ID":"ee313b25-8572-48dd-bd6e-3e4762428e2b","Type":"ContainerDied","Data":"506edd86c26a70c2db9d7c7abd54d977c1956396107ae5536f86ba3c3e901d3a"} Feb 23 13:02:19.168821 master-0 kubenswrapper[7845]: I0223 13:02:19.166032 7845 scope.go:117] "RemoveContainer" containerID="9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8" Feb 23 13:02:19.181566 master-0 kubenswrapper[7845]: I0223 13:02:19.181526 7845 scope.go:117] "RemoveContainer" containerID="9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8" Feb 23 13:02:19.182003 master-0 kubenswrapper[7845]: E0223 13:02:19.181953 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8\": container with ID starting with 9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8 not found: ID does not exist" containerID="9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8" Feb 23 13:02:19.182066 master-0 kubenswrapper[7845]: I0223 13:02:19.182006 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8"} err="failed to get container status 
\"9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8\": rpc error: code = NotFound desc = could not find container \"9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8\": container with ID starting with 9d4ad25efdf4d6268441f017918f58d15cc173de5cec4a899e9f30ef8355c3a8 not found: ID does not exist" Feb 23 13:02:19.182911 master-0 kubenswrapper[7845]: I0223 13:02:19.182854 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.307808 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-config\") pod \"ee313b25-8572-48dd-bd6e-3e4762428e2b\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.307875 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-config\") pod \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.307915 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wckpk\" (UniqueName: \"kubernetes.io/projected/c5c92f94-4bf1-43d3-8409-e816c8247ad8-kube-api-access-wckpk\") pod \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.307950 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swmzp\" (UniqueName: \"kubernetes.io/projected/ee313b25-8572-48dd-bd6e-3e4762428e2b-kube-api-access-swmzp\") pod \"ee313b25-8572-48dd-bd6e-3e4762428e2b\" (UID: 
\"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.307990 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-client-ca\") pod \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.308033 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-client-ca\") pod \"ee313b25-8572-48dd-bd6e-3e4762428e2b\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.308065 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee313b25-8572-48dd-bd6e-3e4762428e2b-serving-cert\") pod \"ee313b25-8572-48dd-bd6e-3e4762428e2b\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.308140 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-proxy-ca-bundles\") pod \"ee313b25-8572-48dd-bd6e-3e4762428e2b\" (UID: \"ee313b25-8572-48dd-bd6e-3e4762428e2b\") " Feb 23 13:02:19.309339 master-0 kubenswrapper[7845]: I0223 13:02:19.308176 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c92f94-4bf1-43d3-8409-e816c8247ad8-serving-cert\") pod \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\" (UID: \"c5c92f94-4bf1-43d3-8409-e816c8247ad8\") " Feb 23 13:02:19.309819 master-0 kubenswrapper[7845]: I0223 13:02:19.309503 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-config" (OuterVolumeSpecName: "config") pod "c5c92f94-4bf1-43d3-8409-e816c8247ad8" (UID: "c5c92f94-4bf1-43d3-8409-e816c8247ad8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:19.310957 master-0 kubenswrapper[7845]: I0223 13:02:19.310888 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-config" (OuterVolumeSpecName: "config") pod "ee313b25-8572-48dd-bd6e-3e4762428e2b" (UID: "ee313b25-8572-48dd-bd6e-3e4762428e2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:19.310957 master-0 kubenswrapper[7845]: I0223 13:02:19.310910 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ee313b25-8572-48dd-bd6e-3e4762428e2b" (UID: "ee313b25-8572-48dd-bd6e-3e4762428e2b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:19.311274 master-0 kubenswrapper[7845]: I0223 13:02:19.311211 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee313b25-8572-48dd-bd6e-3e4762428e2b" (UID: "ee313b25-8572-48dd-bd6e-3e4762428e2b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:19.312519 master-0 kubenswrapper[7845]: I0223 13:02:19.312438 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-client-ca" (OuterVolumeSpecName: "client-ca") pod "c5c92f94-4bf1-43d3-8409-e816c8247ad8" (UID: "c5c92f94-4bf1-43d3-8409-e816c8247ad8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:02:19.315361 master-0 kubenswrapper[7845]: I0223 13:02:19.313200 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c92f94-4bf1-43d3-8409-e816c8247ad8-kube-api-access-wckpk" (OuterVolumeSpecName: "kube-api-access-wckpk") pod "c5c92f94-4bf1-43d3-8409-e816c8247ad8" (UID: "c5c92f94-4bf1-43d3-8409-e816c8247ad8"). InnerVolumeSpecName "kube-api-access-wckpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:19.318029 master-0 kubenswrapper[7845]: I0223 13:02:19.317612 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee313b25-8572-48dd-bd6e-3e4762428e2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee313b25-8572-48dd-bd6e-3e4762428e2b" (UID: "ee313b25-8572-48dd-bd6e-3e4762428e2b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:02:19.318029 master-0 kubenswrapper[7845]: I0223 13:02:19.317823 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee313b25-8572-48dd-bd6e-3e4762428e2b-kube-api-access-swmzp" (OuterVolumeSpecName: "kube-api-access-swmzp") pod "ee313b25-8572-48dd-bd6e-3e4762428e2b" (UID: "ee313b25-8572-48dd-bd6e-3e4762428e2b"). InnerVolumeSpecName "kube-api-access-swmzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:19.319395 master-0 kubenswrapper[7845]: I0223 13:02:19.319329 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c92f94-4bf1-43d3-8409-e816c8247ad8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5c92f94-4bf1-43d3-8409-e816c8247ad8" (UID: "c5c92f94-4bf1-43d3-8409-e816c8247ad8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:02:19.409934 master-0 kubenswrapper[7845]: I0223 13:02:19.409871 7845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.409934 master-0 kubenswrapper[7845]: I0223 13:02:19.409932 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c92f94-4bf1-43d3-8409-e816c8247ad8-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.409934 master-0 kubenswrapper[7845]: I0223 13:02:19.409945 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.409934 master-0 kubenswrapper[7845]: I0223 13:02:19.409955 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.410318 master-0 kubenswrapper[7845]: I0223 13:02:19.409967 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wckpk\" (UniqueName: \"kubernetes.io/projected/c5c92f94-4bf1-43d3-8409-e816c8247ad8-kube-api-access-wckpk\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.410318 master-0 kubenswrapper[7845]: I0223 13:02:19.409981 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swmzp\" (UniqueName: \"kubernetes.io/projected/ee313b25-8572-48dd-bd6e-3e4762428e2b-kube-api-access-swmzp\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.410318 master-0 kubenswrapper[7845]: I0223 13:02:19.409990 7845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c92f94-4bf1-43d3-8409-e816c8247ad8-client-ca\") on 
node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.410318 master-0 kubenswrapper[7845]: I0223 13:02:19.409998 7845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee313b25-8572-48dd-bd6e-3e4762428e2b-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.410318 master-0 kubenswrapper[7845]: I0223 13:02:19.410007 7845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee313b25-8572-48dd-bd6e-3e4762428e2b-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:19.504776 master-0 kubenswrapper[7845]: I0223 13:02:19.504647 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl"] Feb 23 13:02:19.509134 master-0 kubenswrapper[7845]: I0223 13:02:19.509075 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8457dbd4bb-hmgzl"] Feb 23 13:02:20.173148 master-0 kubenswrapper[7845]: I0223 13:02:20.173015 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1860bead-61b8-4678-b583-c13c79575ef4","Type":"ContainerStarted","Data":"923861d3e14f9f1ed180c6fc4f602226ba1fa39cb2d6ada3746794e2192c190f"} Feb 23 13:02:20.175000 master-0 kubenswrapper[7845]: I0223 13:02:20.174960 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" event={"ID":"c5c92f94-4bf1-43d3-8409-e816c8247ad8","Type":"ContainerDied","Data":"66b05e320d8f83354dceb975a24a63b4857505d935ba05a8e6ca9c2b6a0ccca5"} Feb 23 13:02:20.175085 master-0 kubenswrapper[7845]: I0223 13:02:20.175009 7845 scope.go:117] "RemoveContainer" containerID="db22d724efe1109d4fb96815acd5c0809efbc8d1daadfee4aa61045869929218" Feb 23 13:02:20.175149 master-0 kubenswrapper[7845]: I0223 13:02:20.175071 7845 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276" Feb 23 13:02:20.212012 master-0 kubenswrapper[7845]: I0223 13:02:20.211956 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee313b25-8572-48dd-bd6e-3e4762428e2b" path="/var/lib/kubelet/pods/ee313b25-8572-48dd-bd6e-3e4762428e2b/volumes" Feb 23 13:02:20.224758 master-0 kubenswrapper[7845]: I0223 13:02:20.224640 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=3.22461065 podStartE2EDuration="3.22461065s" podCreationTimestamp="2026-02-23 13:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:20.199306484 +0000 UTC m=+74.195037355" watchObservedRunningTime="2026-02-23 13:02:20.22461065 +0000 UTC m=+74.220341571" Feb 23 13:02:20.225757 master-0 kubenswrapper[7845]: I0223 13:02:20.225691 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59947b7887-xg2ln"] Feb 23 13:02:20.227760 master-0 kubenswrapper[7845]: E0223 13:02:20.227654 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c92f94-4bf1-43d3-8409-e816c8247ad8" containerName="route-controller-manager" Feb 23 13:02:20.227760 master-0 kubenswrapper[7845]: I0223 13:02:20.227740 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c92f94-4bf1-43d3-8409-e816c8247ad8" containerName="route-controller-manager" Feb 23 13:02:20.237035 master-0 kubenswrapper[7845]: E0223 13:02:20.227756 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee313b25-8572-48dd-bd6e-3e4762428e2b" containerName="controller-manager" Feb 23 13:02:20.237035 master-0 kubenswrapper[7845]: I0223 13:02:20.237010 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee313b25-8572-48dd-bd6e-3e4762428e2b" 
containerName="controller-manager" Feb 23 13:02:20.237450 master-0 kubenswrapper[7845]: I0223 13:02:20.237288 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c92f94-4bf1-43d3-8409-e816c8247ad8" containerName="route-controller-manager" Feb 23 13:02:20.237450 master-0 kubenswrapper[7845]: I0223 13:02:20.237310 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee313b25-8572-48dd-bd6e-3e4762428e2b" containerName="controller-manager" Feb 23 13:02:20.237895 master-0 kubenswrapper[7845]: I0223 13:02:20.237815 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"] Feb 23 13:02:20.238048 master-0 kubenswrapper[7845]: I0223 13:02:20.237983 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:02:20.238598 master-0 kubenswrapper[7845]: I0223 13:02:20.238567 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" Feb 23 13:02:20.249283 master-0 kubenswrapper[7845]: I0223 13:02:20.246981 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59947b7887-xg2ln"] Feb 23 13:02:20.250108 master-0 kubenswrapper[7845]: I0223 13:02:20.249788 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-n8vwz" Feb 23 13:02:20.250108 master-0 kubenswrapper[7845]: I0223 13:02:20.250073 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:02:20.250563 master-0 kubenswrapper[7845]: I0223 13:02:20.250484 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"] Feb 23 13:02:20.251317 master-0 kubenswrapper[7845]: I0223 13:02:20.251286 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:02:20.251634 master-0 kubenswrapper[7845]: I0223 13:02:20.251607 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-wt8dr" Feb 23 13:02:20.251693 master-0 kubenswrapper[7845]: I0223 13:02:20.251680 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:02:20.251908 master-0 kubenswrapper[7845]: I0223 13:02:20.251872 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:02:20.252074 master-0 kubenswrapper[7845]: I0223 13:02:20.252035 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:02:20.252140 master-0 kubenswrapper[7845]: I0223 
13:02:20.252091 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:02:20.252285 master-0 kubenswrapper[7845]: I0223 13:02:20.252188 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:02:20.252362 master-0 kubenswrapper[7845]: I0223 13:02:20.252332 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:02:20.252410 master-0 kubenswrapper[7845]: I0223 13:02:20.252381 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 13:02:20.252466 master-0 kubenswrapper[7845]: I0223 13:02:20.252417 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:02:20.268584 master-0 kubenswrapper[7845]: I0223 13:02:20.268527 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276"] Feb 23 13:02:20.268584 master-0 kubenswrapper[7845]: I0223 13:02:20.268588 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7b979c6-vb276"] Feb 23 13:02:20.270439 master-0 kubenswrapper[7845]: I0223 13:02:20.270380 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:02:20.322308 master-0 kubenswrapper[7845]: I0223 13:02:20.322214 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" 
Feb 23 13:02:20.322514 master-0 kubenswrapper[7845]: I0223 13:02:20.322335 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.322514 master-0 kubenswrapper[7845]: I0223 13:02:20.322371 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.322514 master-0 kubenswrapper[7845]: I0223 13:02:20.322409 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjpkc\" (UniqueName: \"kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.322514 master-0 kubenswrapper[7845]: I0223 13:02:20.322442 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.322514 master-0 kubenswrapper[7845]: I0223 13:02:20.322471 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.322926 master-0 kubenswrapper[7845]: I0223 13:02:20.322549 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c4jr\" (UniqueName: \"kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.322926 master-0 kubenswrapper[7845]: I0223 13:02:20.322641 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.322926 master-0 kubenswrapper[7845]: I0223 13:02:20.322675 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.423959 master-0 kubenswrapper[7845]: I0223 13:02:20.423809 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.423959 master-0 kubenswrapper[7845]: I0223 13:02:20.423877 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.423959 master-0 kubenswrapper[7845]: I0223 13:02:20.423917 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.423959 master-0 kubenswrapper[7845]: I0223 13:02:20.423952 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.424317 master-0 kubenswrapper[7845]: I0223 13:02:20.423979 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.424317 master-0 kubenswrapper[7845]: I0223 13:02:20.424004 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpkc\" (UniqueName: \"kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.424685 master-0 kubenswrapper[7845]: I0223 13:02:20.424642 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.424685 master-0 kubenswrapper[7845]: I0223 13:02:20.424682 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.424789 master-0 kubenswrapper[7845]: I0223 13:02:20.424708 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c4jr\" (UniqueName: \"kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.425732 master-0 kubenswrapper[7845]: I0223 13:02:20.425688 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.425838 master-0 kubenswrapper[7845]: I0223 13:02:20.425788 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.426038 master-0 kubenswrapper[7845]: I0223 13:02:20.425998 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.426273 master-0 kubenswrapper[7845]: I0223 13:02:20.426186 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.426782 master-0 kubenswrapper[7845]: I0223 13:02:20.426745 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.427566 master-0 kubenswrapper[7845]: I0223 13:02:20.427537 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.429814 master-0 kubenswrapper[7845]: I0223 13:02:20.429776 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.446327 master-0 kubenswrapper[7845]: I0223 13:02:20.446279 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4jr\" (UniqueName: \"kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.449896 master-0 kubenswrapper[7845]: I0223 13:02:20.449830 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpkc\" (UniqueName: \"kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.603986 master-0 kubenswrapper[7845]: I0223 13:02:20.603895 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:20.631690 master-0 kubenswrapper[7845]: I0223 13:02:20.631604 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:20.984669 master-0 kubenswrapper[7845]: I0223 13:02:20.984478 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59947b7887-xg2ln"]
Feb 23 13:02:20.993137 master-0 kubenswrapper[7845]: W0223 13:02:20.993062 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b48459_51ad_4b0d_8608_4ba6d3fa8e16.slice/crio-b279587ff3b533f90c8598bc9cab9d154d09bb9caaf9f198b885d5940932b084 WatchSource:0}: Error finding container b279587ff3b533f90c8598bc9cab9d154d09bb9caaf9f198b885d5940932b084: Status 404 returned error can't find the container with id b279587ff3b533f90c8598bc9cab9d154d09bb9caaf9f198b885d5940932b084
Feb 23 13:02:21.115585 master-0 kubenswrapper[7845]: I0223 13:02:21.115498 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"]
Feb 23 13:02:21.125073 master-0 kubenswrapper[7845]: W0223 13:02:21.124990 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb53d3c98_e99c_4f4e_a9dc_91e3ad30efaa.slice/crio-9933c3953079b9e9be4ada69849d6fdb342498ae2f03fc5ebff1e04b6c03839b WatchSource:0}: Error finding container 9933c3953079b9e9be4ada69849d6fdb342498ae2f03fc5ebff1e04b6c03839b: Status 404 returned error can't find the container with id 9933c3953079b9e9be4ada69849d6fdb342498ae2f03fc5ebff1e04b6c03839b
Feb 23 13:02:21.187690 master-0 kubenswrapper[7845]: I0223 13:02:21.187618 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" event={"ID":"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa","Type":"ContainerStarted","Data":"9933c3953079b9e9be4ada69849d6fdb342498ae2f03fc5ebff1e04b6c03839b"}
Feb 23 13:02:21.190792 master-0 kubenswrapper[7845]: I0223 13:02:21.190676 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" event={"ID":"18b48459-51ad-4b0d-8608-4ba6d3fa8e16","Type":"ContainerStarted","Data":"cb2d2d4fb80101957c4b13b6c2b179a921353fd0e5984e898b9fcd6ec41fc1bb"}
Feb 23 13:02:21.190792 master-0 kubenswrapper[7845]: I0223 13:02:21.190773 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" event={"ID":"18b48459-51ad-4b0d-8608-4ba6d3fa8e16","Type":"ContainerStarted","Data":"b279587ff3b533f90c8598bc9cab9d154d09bb9caaf9f198b885d5940932b084"}
Feb 23 13:02:21.213102 master-0 kubenswrapper[7845]: I0223 13:02:21.212997 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" podStartSLOduration=3.212968825 podStartE2EDuration="3.212968825s" podCreationTimestamp="2026-02-23 13:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:21.212173461 +0000 UTC m=+75.207904422" watchObservedRunningTime="2026-02-23 13:02:21.212968825 +0000 UTC m=+75.208699716"
Feb 23 13:02:21.724538 master-0 kubenswrapper[7845]: I0223 13:02:21.724486 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"]
Feb 23 13:02:21.725271 master-0 kubenswrapper[7845]: I0223 13:02:21.725229 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:02:21.733274 master-0 kubenswrapper[7845]: I0223 13:02:21.730259 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 23 13:02:21.733274 master-0 kubenswrapper[7845]: I0223 13:02:21.731074 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 23 13:02:21.733274 master-0 kubenswrapper[7845]: I0223 13:02:21.731572 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 23 13:02:21.739008 master-0 kubenswrapper[7845]: I0223 13:02:21.738969 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-sxjzf"
Feb 23 13:02:21.740675 master-0 kubenswrapper[7845]: I0223 13:02:21.740642 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"]
Feb 23 13:02:21.744111 master-0 kubenswrapper[7845]: I0223 13:02:21.744079 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-265wg\" (UniqueName: \"kubernetes.io/projected/4bc22782-a369-48aa-a0e8-c1c63ffa3053-kube-api-access-265wg\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:02:21.744171 master-0 kubenswrapper[7845]: I0223 13:02:21.744155 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bc22782-a369-48aa-a0e8-c1c63ffa3053-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:02:21.845268 master-0 kubenswrapper[7845]: I0223 13:02:21.845141 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bc22782-a369-48aa-a0e8-c1c63ffa3053-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:02:21.845512 master-0 kubenswrapper[7845]: I0223 13:02:21.845336 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-265wg\" (UniqueName: \"kubernetes.io/projected/4bc22782-a369-48aa-a0e8-c1c63ffa3053-kube-api-access-265wg\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:02:21.850296 master-0 kubenswrapper[7845]: I0223 13:02:21.850233 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bc22782-a369-48aa-a0e8-c1c63ffa3053-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:02:21.876952 master-0 kubenswrapper[7845]: I0223 13:02:21.876892 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-265wg\" (UniqueName: \"kubernetes.io/projected/4bc22782-a369-48aa-a0e8-c1c63ffa3053-kube-api-access-265wg\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:02:22.039758 master-0 kubenswrapper[7845]: I0223 13:02:22.039610 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:02:22.221319 master-0 kubenswrapper[7845]: I0223 13:02:22.218425 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c92f94-4bf1-43d3-8409-e816c8247ad8" path="/var/lib/kubelet/pods/c5c92f94-4bf1-43d3-8409-e816c8247ad8/volumes"
Feb 23 13:02:22.221319 master-0 kubenswrapper[7845]: I0223 13:02:22.219090 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" event={"ID":"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa","Type":"ContainerStarted","Data":"022c9b5345f424d899a3eb1c0e7a0d156bb27c5c3be0d99e29d7ec4cb8956ba6"}
Feb 23 13:02:22.221319 master-0 kubenswrapper[7845]: I0223 13:02:22.219144 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:22.221319 master-0 kubenswrapper[7845]: I0223 13:02:22.220882 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:02:22.327918 master-0 kubenswrapper[7845]: I0223 13:02:22.327580 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" podStartSLOduration=4.327553119 podStartE2EDuration="4.327553119s" podCreationTimestamp="2026-02-23 13:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:02:22.327086255 +0000 UTC m=+76.322817156" watchObservedRunningTime="2026-02-23 13:02:22.327553119 +0000 UTC m=+76.323284060"
Feb 23 13:02:22.507311 master-0 kubenswrapper[7845]: I0223 13:02:22.506174 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"]
Feb 23 13:02:22.518703 master-0 kubenswrapper[7845]: W0223 13:02:22.518634 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc22782_a369_48aa_a0e8_c1c63ffa3053.slice/crio-b6114492191186efcd3545eb575590b7cd16391b8a4aad43b239f5268bdf89f2 WatchSource:0}: Error finding container b6114492191186efcd3545eb575590b7cd16391b8a4aad43b239f5268bdf89f2: Status 404 returned error can't find the container with id b6114492191186efcd3545eb575590b7cd16391b8a4aad43b239f5268bdf89f2
Feb 23 13:02:23.213725 master-0 kubenswrapper[7845]: I0223 13:02:23.213663 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-t9gx8_99399ebb-c95f-4663-b3b6-f5dfabf47fcf/openshift-controller-manager-operator/0.log"
Feb 23 13:02:23.213725 master-0 kubenswrapper[7845]: I0223 13:02:23.213732 7845 generic.go:334] "Generic (PLEG): container finished" podID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" containerID="debed11d31f7b75fad2471852851fc7fa04c00d3d8576daf98e7b22222001920" exitCode=1
Feb 23 13:02:23.214029 master-0 kubenswrapper[7845]: I0223 13:02:23.213777 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" event={"ID":"99399ebb-c95f-4663-b3b6-f5dfabf47fcf","Type":"ContainerDied","Data":"debed11d31f7b75fad2471852851fc7fa04c00d3d8576daf98e7b22222001920"}
Feb 23 13:02:23.214330 master-0 kubenswrapper[7845]: I0223 13:02:23.214302 7845 scope.go:117] "RemoveContainer" containerID="debed11d31f7b75fad2471852851fc7fa04c00d3d8576daf98e7b22222001920"
Feb 23 13:02:23.216885 master-0 kubenswrapper[7845]: I0223 13:02:23.216759 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w" event={"ID":"4bc22782-a369-48aa-a0e8-c1c63ffa3053","Type":"ContainerStarted","Data":"b6114492191186efcd3545eb575590b7cd16391b8a4aad43b239f5268bdf89f2"}
Feb 23 13:02:23.217184 master-0 kubenswrapper[7845]: I0223 13:02:23.217136 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:23.223488 master-0 kubenswrapper[7845]: I0223 13:02:23.223121 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:02:24.235498 master-0 kubenswrapper[7845]: I0223 13:02:24.234646 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-t9gx8_99399ebb-c95f-4663-b3b6-f5dfabf47fcf/openshift-controller-manager-operator/0.log"
Feb 23 13:02:24.235498 master-0 kubenswrapper[7845]: I0223 13:02:24.235365 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" event={"ID":"99399ebb-c95f-4663-b3b6-f5dfabf47fcf","Type":"ContainerStarted","Data":"276f3b55300c4b42b7df0ff3b3561d901d7c658a4848ac016dd56a91f3b44118"}
Feb 23 13:02:24.934407 master-0 kubenswrapper[7845]: I0223 13:02:24.934311 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"]
Feb 23 13:02:24.935041 master-0 kubenswrapper[7845]: I0223 13:02:24.935025 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:24.936830 master-0 kubenswrapper[7845]: W0223 13:02:24.936790 7845 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'master-0' and this object
Feb 23 13:02:24.936906 master-0 kubenswrapper[7845]: E0223 13:02:24.936853 7845 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Feb 23 13:02:24.936906 master-0 kubenswrapper[7845]: W0223 13:02:24.936790 7845 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'master-0' and this object
Feb 23 13:02:24.936988 master-0 kubenswrapper[7845]: E0223 13:02:24.936899 7845 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Feb 23 13:02:24.937357 master-0 kubenswrapper[7845]: W0223 13:02:24.937329 7845 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-8ph7r": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-8ph7r" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'master-0' and this object
Feb 23 13:02:24.937398 master-0 kubenswrapper[7845]: E0223 13:02:24.937372 7845 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-8ph7r\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-8ph7r\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Feb 23 13:02:24.937440 master-0 kubenswrapper[7845]: W0223 13:02:24.937402 7845 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'master-0' and this object
Feb 23 13:02:24.937440 master-0 kubenswrapper[7845]: E0223 13:02:24.937429 7845 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Feb 23 13:02:24.937756 master-0 kubenswrapper[7845]: W0223 13:02:24.937734 7845 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'master-0' and this object
Feb 23 13:02:24.937795 master-0 kubenswrapper[7845]: E0223 13:02:24.937762 7845 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Feb 23 13:02:24.937795 master-0 kubenswrapper[7845]: W0223 13:02:24.937740 7845 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'master-0' and this object
Feb 23 13:02:24.937795 master-0 kubenswrapper[7845]: E0223 13:02:24.937788 7845 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Feb 23 13:02:25.115262 master-0 kubenswrapper[7845]: I0223 13:02:25.115165 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-auth-proxy-config\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.115517 master-0 kubenswrapper[7845]: I0223 13:02:25.115350 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsnpf\" (UniqueName: \"kubernetes.io/projected/21c55fd9-96b6-4dbb-9c26-a499a76cb259-kube-api-access-wsnpf\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.115517 master-0 kubenswrapper[7845]: I0223 13:02:25.115454 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21c55fd9-96b6-4dbb-9c26-a499a76cb259-machine-approver-tls\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.115517 master-0 kubenswrapper[7845]: I0223 13:02:25.115490 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.216132 master-0 kubenswrapper[7845]: I0223 13:02:25.216083 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsnpf\" (UniqueName: \"kubernetes.io/projected/21c55fd9-96b6-4dbb-9c26-a499a76cb259-kube-api-access-wsnpf\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.216302 master-0 kubenswrapper[7845]: I0223 13:02:25.216166 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21c55fd9-96b6-4dbb-9c26-a499a76cb259-machine-approver-tls\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.216302 master-0 kubenswrapper[7845]: I0223 13:02:25.216228 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.216476 master-0 kubenswrapper[7845]: I0223 13:02:25.216442 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-auth-proxy-config\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.243168 master-0 kubenswrapper[7845]: I0223 13:02:25.243120 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w" event={"ID":"4bc22782-a369-48aa-a0e8-c1c63ffa3053","Type":"ContainerStarted","Data":"0a361025f0f0b4dd3a2d9d3bc39a5bc567c08f5ded2a78f736405795214ce703"}
Feb 23 13:02:25.262297 master-0 kubenswrapper[7845]: I0223 13:02:25.262223 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w" podStartSLOduration=2.259214594 podStartE2EDuration="4.26220424s" podCreationTimestamp="2026-02-23 13:02:21 +0000 UTC" firstStartedPulling="2026-02-23 13:02:22.523669957 +0000 UTC m=+76.519400868" lastFinishedPulling="2026-02-23 13:02:24.526659613 +0000 UTC m=+78.522390514" observedRunningTime="2026-02-23 13:02:25.261556871 +0000 UTC m=+79.257287782" watchObservedRunningTime="2026-02-23 13:02:25.26220424 +0000 UTC m=+79.257935111"
Feb 23 13:02:25.790446 master-0 kubenswrapper[7845]: I0223 13:02:25.790345 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 23 13:02:25.802299 master-0 kubenswrapper[7845]: I0223 13:02:25.802232 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21c55fd9-96b6-4dbb-9c26-a499a76cb259-machine-approver-tls\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:25.974891 master-0 kubenswrapper[7845]: I0223 13:02:25.974786 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 23 13:02:26.009443 master-0 kubenswrapper[7845]: I0223 13:02:26.009353 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 23 13:02:26.017799 master-0 kubenswrapper[7845]: I0223 13:02:26.017741 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-auth-proxy-config\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:26.193482 master-0 kubenswrapper[7845]: I0223 13:02:26.193406 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-8ph7r"
Feb 23 13:02:26.218432 master-0 kubenswrapper[7845]: E0223 13:02:26.217709 7845 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:02:26.218432 master-0 kubenswrapper[7845]: E0223 13:02:26.217872 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config podName:21c55fd9-96b6-4dbb-9c26-a499a76cb259 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:26.717829772 +0000 UTC m=+80.713560683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config") pod "machine-approver-798b897698-j6dvg" (UID: "21c55fd9-96b6-4dbb-9c26-a499a76cb259") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:02:26.224012 master-0 kubenswrapper[7845]: I0223 13:02:26.223951 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 23 13:02:26.236049 master-0 kubenswrapper[7845]: I0223 13:02:26.235983 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsnpf\" (UniqueName: \"kubernetes.io/projected/21c55fd9-96b6-4dbb-9c26-a499a76cb259-kube-api-access-wsnpf\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:26.399870 master-0 kubenswrapper[7845]: I0223 13:02:26.399787 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 23 13:02:26.755579 master-0 kubenswrapper[7845]: I0223 13:02:26.755487 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:26.756539 master-0 kubenswrapper[7845]: I0223 13:02:26.756481 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config\") pod \"machine-approver-798b897698-j6dvg\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:27.052298 master-0 kubenswrapper[7845]: I0223 13:02:27.052172 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"
Feb 23 13:02:27.265870 master-0 kubenswrapper[7845]: I0223 13:02:27.265828 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" event={"ID":"21c55fd9-96b6-4dbb-9c26-a499a76cb259","Type":"ContainerStarted","Data":"0c69dec4a845a27a998ea351ea64ca562e17d952ed5877d2399e163463006b53"}
Feb 23 13:02:27.883402 master-0 kubenswrapper[7845]: I0223 13:02:27.883330 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v"]
Feb 23 13:02:27.885774 master-0 kubenswrapper[7845]: I0223 13:02:27.885721 7845 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:27.890217 master-0 kubenswrapper[7845]: I0223 13:02:27.890119 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-zmzm6" Feb 23 13:02:27.897888 master-0 kubenswrapper[7845]: I0223 13:02:27.890368 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 23 13:02:27.900223 master-0 kubenswrapper[7845]: I0223 13:02:27.900140 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 23 13:02:27.900744 master-0 kubenswrapper[7845]: I0223 13:02:27.900659 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 23 13:02:27.905177 master-0 kubenswrapper[7845]: I0223 13:02:27.905062 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 23 13:02:27.917984 master-0 kubenswrapper[7845]: I0223 13:02:27.917860 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v"] Feb 23 13:02:28.077272 master-0 kubenswrapper[7845]: I0223 13:02:28.076692 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d32952be-0fe3-431f-aa8f-6a35159fa845-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.077272 master-0 kubenswrapper[7845]: I0223 13:02:28.076800 7845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d32952be-0fe3-431f-aa8f-6a35159fa845-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.077272 master-0 kubenswrapper[7845]: I0223 13:02:28.076923 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zs2l\" (UniqueName: \"kubernetes.io/projected/d32952be-0fe3-431f-aa8f-6a35159fa845-kube-api-access-5zs2l\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.178758 master-0 kubenswrapper[7845]: I0223 13:02:28.178694 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d32952be-0fe3-431f-aa8f-6a35159fa845-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.178925 master-0 kubenswrapper[7845]: I0223 13:02:28.178834 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d32952be-0fe3-431f-aa8f-6a35159fa845-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.179161 master-0 kubenswrapper[7845]: I0223 13:02:28.179082 7845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5zs2l\" (UniqueName: \"kubernetes.io/projected/d32952be-0fe3-431f-aa8f-6a35159fa845-kube-api-access-5zs2l\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.181532 master-0 kubenswrapper[7845]: I0223 13:02:28.181500 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d32952be-0fe3-431f-aa8f-6a35159fa845-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.186482 master-0 kubenswrapper[7845]: I0223 13:02:28.186430 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d32952be-0fe3-431f-aa8f-6a35159fa845-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.204334 master-0 kubenswrapper[7845]: I0223 13:02:28.199510 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zs2l\" (UniqueName: \"kubernetes.io/projected/d32952be-0fe3-431f-aa8f-6a35159fa845-kube-api-access-5zs2l\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.259524 master-0 kubenswrapper[7845]: I0223 13:02:28.259476 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:02:28.270224 master-0 kubenswrapper[7845]: I0223 13:02:28.270173 7845 generic.go:334] "Generic (PLEG): container finished" podID="25b5540c-da7d-4b6f-a15f-394451f4674e" containerID="c7bf15e370636a4712d661fd1bd5bae0ffc88b863a6740ad094330d58359da39" exitCode=0 Feb 23 13:02:28.270420 master-0 kubenswrapper[7845]: I0223 13:02:28.270252 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" event={"ID":"25b5540c-da7d-4b6f-a15f-394451f4674e","Type":"ContainerDied","Data":"c7bf15e370636a4712d661fd1bd5bae0ffc88b863a6740ad094330d58359da39"} Feb 23 13:02:28.270729 master-0 kubenswrapper[7845]: I0223 13:02:28.270700 7845 scope.go:117] "RemoveContainer" containerID="c7bf15e370636a4712d661fd1bd5bae0ffc88b863a6740ad094330d58359da39" Feb 23 13:02:28.273066 master-0 kubenswrapper[7845]: I0223 13:02:28.272819 7845 generic.go:334] "Generic (PLEG): container finished" podID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" containerID="3ae29be9fa54806971b4e3b9c2201c003f7b8a22a37869a91acf05e5506d41f9" exitCode=0 Feb 23 13:02:28.273066 master-0 kubenswrapper[7845]: I0223 13:02:28.272867 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" event={"ID":"3ab71705-d574-4f95-b3fc-9f7cf5e8a557","Type":"ContainerDied","Data":"3ae29be9fa54806971b4e3b9c2201c003f7b8a22a37869a91acf05e5506d41f9"} Feb 23 13:02:28.274479 master-0 kubenswrapper[7845]: I0223 13:02:28.274180 7845 scope.go:117] "RemoveContainer" containerID="3ae29be9fa54806971b4e3b9c2201c003f7b8a22a37869a91acf05e5506d41f9" Feb 23 13:02:28.277424 master-0 kubenswrapper[7845]: I0223 13:02:28.276864 7845 generic.go:334] "Generic (PLEG): container finished" podID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" 
containerID="723e0d3ac0bfebcf9019d23491b2a123aaa94b496865e7bf006a731caaf79830" exitCode=0 Feb 23 13:02:28.277424 master-0 kubenswrapper[7845]: I0223 13:02:28.276908 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" event={"ID":"b1970ec8-620e-4529-bf3b-1cf9a52c27d3","Type":"ContainerDied","Data":"723e0d3ac0bfebcf9019d23491b2a123aaa94b496865e7bf006a731caaf79830"} Feb 23 13:02:28.277424 master-0 kubenswrapper[7845]: I0223 13:02:28.277167 7845 scope.go:117] "RemoveContainer" containerID="723e0d3ac0bfebcf9019d23491b2a123aaa94b496865e7bf006a731caaf79830" Feb 23 13:02:28.279770 master-0 kubenswrapper[7845]: I0223 13:02:28.279696 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" event={"ID":"21c55fd9-96b6-4dbb-9c26-a499a76cb259","Type":"ContainerStarted","Data":"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1"} Feb 23 13:02:28.703093 master-0 kubenswrapper[7845]: I0223 13:02:28.703015 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf"] Feb 23 13:02:28.705233 master-0 kubenswrapper[7845]: I0223 13:02:28.705025 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:02:28.708503 master-0 kubenswrapper[7845]: I0223 13:02:28.707136 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 13:02:28.708503 master-0 kubenswrapper[7845]: I0223 13:02:28.707356 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 13:02:28.708656 master-0 kubenswrapper[7845]: I0223 13:02:28.708538 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 13:02:28.708822 master-0 kubenswrapper[7845]: I0223 13:02:28.708803 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-zmw9t" Feb 23 13:02:28.717985 master-0 kubenswrapper[7845]: I0223 13:02:28.717259 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf"] Feb 23 13:02:28.798259 master-0 kubenswrapper[7845]: I0223 13:02:28.793167 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlwn\" (UniqueName: \"kubernetes.io/projected/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-kube-api-access-8hlwn\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:02:28.798259 master-0 kubenswrapper[7845]: I0223 13:02:28.793337 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: 
\"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:02:28.802104 master-0 kubenswrapper[7845]: I0223 13:02:28.802021 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v"] Feb 23 13:02:28.814358 master-0 kubenswrapper[7845]: W0223 13:02:28.814286 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd32952be_0fe3_431f_aa8f_6a35159fa845.slice/crio-9f4b505810756bc1aacbada86c7f39ac25a9943e5236452d1fe977e3b589b653 WatchSource:0}: Error finding container 9f4b505810756bc1aacbada86c7f39ac25a9943e5236452d1fe977e3b589b653: Status 404 returned error can't find the container with id 9f4b505810756bc1aacbada86c7f39ac25a9943e5236452d1fe977e3b589b653 Feb 23 13:02:28.895169 master-0 kubenswrapper[7845]: I0223 13:02:28.894500 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlwn\" (UniqueName: \"kubernetes.io/projected/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-kube-api-access-8hlwn\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:02:28.895169 master-0 kubenswrapper[7845]: I0223 13:02:28.894579 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:02:28.899953 master-0 kubenswrapper[7845]: I0223 13:02:28.899911 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:02:28.912714 master-0 kubenswrapper[7845]: I0223 13:02:28.912663 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlwn\" (UniqueName: \"kubernetes.io/projected/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-kube-api-access-8hlwn\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:02:29.027906 master-0 kubenswrapper[7845]: I0223 13:02:29.027775 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:02:29.283660 master-0 kubenswrapper[7845]: I0223 13:02:29.282673 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:02:29.288569 master-0 kubenswrapper[7845]: I0223 13:02:29.288521 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" event={"ID":"b1970ec8-620e-4529-bf3b-1cf9a52c27d3","Type":"ContainerStarted","Data":"90c4d565bc8a9a3504b08ffb42ce37fbe9564d90f4149f9a2efe531a546f0e50"} Feb 23 13:02:29.290637 master-0 kubenswrapper[7845]: I0223 13:02:29.290603 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" event={"ID":"25b5540c-da7d-4b6f-a15f-394451f4674e","Type":"ContainerStarted","Data":"93e9de56164a0387038f634504ac664a837d38dcf48d420691331e0584258696"} Feb 23 13:02:29.293037 master-0 kubenswrapper[7845]: I0223 
13:02:29.292999 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" event={"ID":"3ab71705-d574-4f95-b3fc-9f7cf5e8a557","Type":"ContainerStarted","Data":"6eb708e99faa68cc0fb3a1744a6c33cf30aa202ca3b55e421e64cd3dbc5a07f1"} Feb 23 13:02:29.295769 master-0 kubenswrapper[7845]: I0223 13:02:29.295715 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" event={"ID":"d32952be-0fe3-431f-aa8f-6a35159fa845","Type":"ContainerStarted","Data":"b404e3837f83a7c5868973e390a0b6951789b1b00d050c98f5efd9ddceeb5841"} Feb 23 13:02:29.295833 master-0 kubenswrapper[7845]: I0223 13:02:29.295780 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" event={"ID":"d32952be-0fe3-431f-aa8f-6a35159fa845","Type":"ContainerStarted","Data":"9f4b505810756bc1aacbada86c7f39ac25a9943e5236452d1fe977e3b589b653"} Feb 23 13:02:29.866635 master-0 kubenswrapper[7845]: I0223 13:02:29.865922 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2"] Feb 23 13:02:29.867101 master-0 kubenswrapper[7845]: I0223 13:02:29.867001 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:29.870106 master-0 kubenswrapper[7845]: I0223 13:02:29.869822 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-4q8qn" Feb 23 13:02:29.870334 master-0 kubenswrapper[7845]: I0223 13:02:29.870111 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 23 13:02:29.878183 master-0 kubenswrapper[7845]: I0223 13:02:29.877892 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 23 13:02:29.878676 master-0 kubenswrapper[7845]: I0223 13:02:29.877636 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 23 13:02:29.885807 master-0 kubenswrapper[7845]: I0223 13:02:29.883620 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2"] Feb 23 13:02:29.888600 master-0 kubenswrapper[7845]: I0223 13:02:29.886157 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 23 13:02:29.958954 master-0 kubenswrapper[7845]: I0223 13:02:29.958602 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:29.958954 master-0 kubenswrapper[7845]: I0223 13:02:29.958703 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:29.958954 master-0 kubenswrapper[7845]: I0223 13:02:29.958766 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:29.958954 master-0 kubenswrapper[7845]: I0223 13:02:29.958804 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmk8\" (UniqueName: \"kubernetes.io/projected/16898873-740b-4b85-99cf-d25a28d4ab00-kube-api-access-xhmk8\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:29.958954 master-0 kubenswrapper[7845]: I0223 13:02:29.958846 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.060202 master-0 kubenswrapper[7845]: I0223 13:02:30.060138 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: 
\"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.060510 master-0 kubenswrapper[7845]: I0223 13:02:30.060325 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhmk8\" (UniqueName: \"kubernetes.io/projected/16898873-740b-4b85-99cf-d25a28d4ab00-kube-api-access-xhmk8\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.060510 master-0 kubenswrapper[7845]: I0223 13:02:30.060418 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.060649 master-0 kubenswrapper[7845]: I0223 13:02:30.060541 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.060724 master-0 kubenswrapper[7845]: I0223 13:02:30.060688 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.061899 master-0 kubenswrapper[7845]: I0223 13:02:30.061074 7845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.061899 master-0 kubenswrapper[7845]: I0223 13:02:30.061701 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.065318 master-0 kubenswrapper[7845]: I0223 13:02:30.065197 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.066441 master-0 kubenswrapper[7845]: I0223 13:02:30.066376 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.081313 master-0 kubenswrapper[7845]: I0223 13:02:30.081211 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhmk8\" (UniqueName: \"kubernetes.io/projected/16898873-740b-4b85-99cf-d25a28d4ab00-kube-api-access-xhmk8\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " 
pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.195990 master-0 kubenswrapper[7845]: I0223 13:02:30.195928 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:02:30.226079 master-0 kubenswrapper[7845]: I0223 13:02:30.226019 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf"] Feb 23 13:02:30.307875 master-0 kubenswrapper[7845]: I0223 13:02:30.307820 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" event={"ID":"21c55fd9-96b6-4dbb-9c26-a499a76cb259","Type":"ContainerStarted","Data":"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17"} Feb 23 13:02:30.328540 master-0 kubenswrapper[7845]: I0223 13:02:30.328211 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" podStartSLOduration=3.946372013 podStartE2EDuration="6.328188697s" podCreationTimestamp="2026-02-23 13:02:24 +0000 UTC" firstStartedPulling="2026-02-23 13:02:27.431227337 +0000 UTC m=+81.426958218" lastFinishedPulling="2026-02-23 13:02:29.813044031 +0000 UTC m=+83.808774902" observedRunningTime="2026-02-23 13:02:30.327636781 +0000 UTC m=+84.323367652" watchObservedRunningTime="2026-02-23 13:02:30.328188697 +0000 UTC m=+84.323919568" Feb 23 13:02:30.678907 master-0 kubenswrapper[7845]: I0223 13:02:30.678750 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2"] Feb 23 13:02:30.766891 master-0 kubenswrapper[7845]: I0223 13:02:30.766818 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p"] Feb 23 13:02:30.767942 master-0 kubenswrapper[7845]: I0223 
13:02:30.767909 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.771570 master-0 kubenswrapper[7845]: I0223 13:02:30.771526 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 23 13:02:30.771781 master-0 kubenswrapper[7845]: I0223 13:02:30.771751 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 23 13:02:30.773320 master-0 kubenswrapper[7845]: I0223 13:02:30.770906 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-dldvx" Feb 23 13:02:30.793149 master-0 kubenswrapper[7845]: I0223 13:02:30.793098 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p"] Feb 23 13:02:30.869603 master-0 kubenswrapper[7845]: I0223 13:02:30.869564 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.869741 master-0 kubenswrapper[7845]: I0223 13:02:30.869619 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.869741 master-0 kubenswrapper[7845]: I0223 13:02:30.869656 7845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhl9t\" (UniqueName: \"kubernetes.io/projected/3d85c030-4931-42d7-afd6-72b41789aea8-kube-api-access-zhl9t\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.971671 master-0 kubenswrapper[7845]: I0223 13:02:30.971612 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.972320 master-0 kubenswrapper[7845]: I0223 13:02:30.971706 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.972320 master-0 kubenswrapper[7845]: I0223 13:02:30.971783 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhl9t\" (UniqueName: \"kubernetes.io/projected/3d85c030-4931-42d7-afd6-72b41789aea8-kube-api-access-zhl9t\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.972683 master-0 kubenswrapper[7845]: I0223 13:02:30.972642 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config\") pod 
\"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.974940 master-0 kubenswrapper[7845]: I0223 13:02:30.974911 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:30.989376 master-0 kubenswrapper[7845]: I0223 13:02:30.989314 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhl9t\" (UniqueName: \"kubernetes.io/projected/3d85c030-4931-42d7-afd6-72b41789aea8-kube-api-access-zhl9t\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:31.117444 master-0 kubenswrapper[7845]: I0223 13:02:31.117363 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:02:31.201037 master-0 kubenswrapper[7845]: I0223 13:02:31.199727 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-xltpx"] Feb 23 13:02:31.201037 master-0 kubenswrapper[7845]: I0223 13:02:31.200536 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.203342 master-0 kubenswrapper[7845]: I0223 13:02:31.203299 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-wbd45" Feb 23 13:02:31.203809 master-0 kubenswrapper[7845]: I0223 13:02:31.203667 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 23 13:02:31.203809 master-0 kubenswrapper[7845]: I0223 13:02:31.203714 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 23 13:02:31.203980 master-0 kubenswrapper[7845]: I0223 13:02:31.203933 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 23 13:02:31.204305 master-0 kubenswrapper[7845]: I0223 13:02:31.204103 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 23 13:02:31.212828 master-0 kubenswrapper[7845]: I0223 13:02:31.212771 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-xltpx"] Feb 23 13:02:31.221569 master-0 kubenswrapper[7845]: I0223 13:02:31.221075 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 23 13:02:31.277012 master-0 kubenswrapper[7845]: I0223 13:02:31.276721 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70ccda5f-ca1a-4fce-b77f-a1132f85635a-snapshots\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.277012 master-0 kubenswrapper[7845]: I0223 13:02:31.276803 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.277012 master-0 kubenswrapper[7845]: I0223 13:02:31.276834 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdtv\" (UniqueName: \"kubernetes.io/projected/70ccda5f-ca1a-4fce-b77f-a1132f85635a-kube-api-access-mwdtv\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.277012 master-0 kubenswrapper[7845]: I0223 13:02:31.276870 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.277012 master-0 kubenswrapper[7845]: I0223 13:02:31.276894 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.294725 master-0 kubenswrapper[7845]: I0223 13:02:31.294640 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"] Feb 23 13:02:31.295734 master-0 kubenswrapper[7845]: I0223 13:02:31.295449 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.298504 master-0 kubenswrapper[7845]: I0223 13:02:31.298268 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 13:02:31.298504 master-0 kubenswrapper[7845]: I0223 13:02:31.298237 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-h78lq" Feb 23 13:02:31.305639 master-0 kubenswrapper[7845]: I0223 13:02:31.305607 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"] Feb 23 13:02:31.317613 master-0 kubenswrapper[7845]: I0223 13:02:31.317553 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerStarted","Data":"623b2142d274970e84b3bbba2aa8e77e527e6d06e0243078dfae6d82495ba0a1"} Feb 23 13:02:31.319426 master-0 kubenswrapper[7845]: I0223 13:02:31.319402 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" event={"ID":"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab","Type":"ContainerStarted","Data":"1e39861f7eba3a69549695ea713f86bb313f7b6a9495d969cd59f6af1de1fb17"} Feb 23 13:02:31.379511 master-0 kubenswrapper[7845]: I0223 13:02:31.379465 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2857n\" (UniqueName: \"kubernetes.io/projected/d91fa6bb-0c88-4930-884a-67e840d58a9f-kube-api-access-2857n\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.380460 master-0 kubenswrapper[7845]: I0223 
13:02:31.380414 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.380705 master-0 kubenswrapper[7845]: I0223 13:02:31.380661 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70ccda5f-ca1a-4fce-b77f-a1132f85635a-snapshots\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.380848 master-0 kubenswrapper[7845]: I0223 13:02:31.380829 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdtv\" (UniqueName: \"kubernetes.io/projected/70ccda5f-ca1a-4fce-b77f-a1132f85635a-kube-api-access-mwdtv\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.380944 master-0 kubenswrapper[7845]: I0223 13:02:31.380926 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.381115 master-0 kubenswrapper[7845]: I0223 13:02:31.381097 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert\") pod 
\"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.381225 master-0 kubenswrapper[7845]: I0223 13:02:31.381208 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.382000 master-0 kubenswrapper[7845]: I0223 13:02:31.381953 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.382412 master-0 kubenswrapper[7845]: I0223 13:02:31.382386 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70ccda5f-ca1a-4fce-b77f-a1132f85635a-snapshots\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.383160 master-0 kubenswrapper[7845]: I0223 13:02:31.383096 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.383969 master-0 kubenswrapper[7845]: I0223 13:02:31.383931 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.389964 master-0 kubenswrapper[7845]: I0223 13:02:31.389806 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.403281 master-0 kubenswrapper[7845]: I0223 13:02:31.403197 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwdtv\" (UniqueName: \"kubernetes.io/projected/70ccda5f-ca1a-4fce-b77f-a1132f85635a-kube-api-access-mwdtv\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.417580 master-0 kubenswrapper[7845]: I0223 13:02:31.417541 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"] Feb 23 13:02:31.418170 master-0 kubenswrapper[7845]: I0223 13:02:31.418142 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:02:31.419809 master-0 kubenswrapper[7845]: I0223 13:02:31.419748 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-f5gf8" Feb 23 13:02:31.421059 master-0 kubenswrapper[7845]: I0223 13:02:31.421026 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 23 13:02:31.435290 master-0 kubenswrapper[7845]: I0223 13:02:31.435256 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"] Feb 23 13:02:31.484938 master-0 kubenswrapper[7845]: I0223 13:02:31.484856 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.484938 master-0 kubenswrapper[7845]: I0223 13:02:31.484934 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdqd6\" (UniqueName: \"kubernetes.io/projected/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-kube-api-access-xdqd6\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:02:31.485285 master-0 kubenswrapper[7845]: I0223 13:02:31.484990 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert\") 
pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:02:31.485285 master-0 kubenswrapper[7845]: I0223 13:02:31.485027 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2857n\" (UniqueName: \"kubernetes.io/projected/d91fa6bb-0c88-4930-884a-67e840d58a9f-kube-api-access-2857n\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.485285 master-0 kubenswrapper[7845]: I0223 13:02:31.485054 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.487856 master-0 kubenswrapper[7845]: I0223 13:02:31.487797 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.494999 master-0 kubenswrapper[7845]: I0223 13:02:31.494969 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.510563 master-0 kubenswrapper[7845]: 
I0223 13:02:31.510338 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2857n\" (UniqueName: \"kubernetes.io/projected/d91fa6bb-0c88-4930-884a-67e840d58a9f-kube-api-access-2857n\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.537704 master-0 kubenswrapper[7845]: I0223 13:02:31.537652 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:02:31.549670 master-0 kubenswrapper[7845]: I0223 13:02:31.549615 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p"] Feb 23 13:02:31.575816 master-0 kubenswrapper[7845]: W0223 13:02:31.575769 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d85c030_4931_42d7_afd6_72b41789aea8.slice/crio-e863839c35f3d76c23dbc06dbedd4d1482a212122b16325b611cacabea8825bb WatchSource:0}: Error finding container e863839c35f3d76c23dbc06dbedd4d1482a212122b16325b611cacabea8825bb: Status 404 returned error can't find the container with id e863839c35f3d76c23dbc06dbedd4d1482a212122b16325b611cacabea8825bb Feb 23 13:02:31.587507 master-0 kubenswrapper[7845]: I0223 13:02:31.587470 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdqd6\" (UniqueName: \"kubernetes.io/projected/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-kube-api-access-xdqd6\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:02:31.587615 master-0 kubenswrapper[7845]: I0223 13:02:31.587533 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:02:31.596386 master-0 kubenswrapper[7845]: I0223 13:02:31.596342 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:02:31.612674 master-0 kubenswrapper[7845]: I0223 13:02:31.612614 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdqd6\" (UniqueName: \"kubernetes.io/projected/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-kube-api-access-xdqd6\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:02:31.621211 master-0 kubenswrapper[7845]: I0223 13:02:31.621143 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:02:31.691169 master-0 kubenswrapper[7845]: I0223 13:02:31.691120 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"] Feb 23 13:02:31.691923 master-0 kubenswrapper[7845]: I0223 13:02:31.691878 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:02:31.699555 master-0 kubenswrapper[7845]: I0223 13:02:31.699506 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 13:02:31.699772 master-0 kubenswrapper[7845]: I0223 13:02:31.699584 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 13:02:31.699772 master-0 kubenswrapper[7845]: I0223 13:02:31.699778 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 13:02:31.699876 master-0 kubenswrapper[7845]: I0223 13:02:31.699783 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 13:02:31.699987 master-0 kubenswrapper[7845]: I0223 13:02:31.699884 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 13:02:31.700263 master-0 kubenswrapper[7845]: I0223 13:02:31.700220 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-5499c" Feb 23 13:02:31.715400 master-0 kubenswrapper[7845]: I0223 13:02:31.709861 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"] Feb 23 13:02:31.727195 master-0 kubenswrapper[7845]: I0223 13:02:31.727132 7845 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 23 13:02:31.729573 master-0 kubenswrapper[7845]: I0223 13:02:31.727380 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" 
containerID="cri-o://b58d0f68f1bce11a0ca3232dc9f5a8f1bbd2f9babb595ae60e80f32714fa923e" gracePeriod=30 Feb 23 13:02:31.729573 master-0 kubenswrapper[7845]: I0223 13:02:31.727694 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" containerID="cri-o://b2243c1b0e1a884637ce32ff21a340a8fd2d151e689c0ac21c3f49c0279d57f8" gracePeriod=30 Feb 23 13:02:31.730517 master-0 kubenswrapper[7845]: I0223 13:02:31.730489 7845 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:02:31.731000 master-0 kubenswrapper[7845]: E0223 13:02:31.730953 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 23 13:02:31.751424 master-0 kubenswrapper[7845]: I0223 13:02:31.751388 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 23 13:02:31.751640 master-0 kubenswrapper[7845]: E0223 13:02:31.751629 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 23 13:02:31.751700 master-0 kubenswrapper[7845]: I0223 13:02:31.751691 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 23 13:02:31.751928 master-0 kubenswrapper[7845]: I0223 13:02:31.751916 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 23 13:02:31.752006 master-0 kubenswrapper[7845]: I0223 13:02:31.751986 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 23 13:02:31.753486 master-0 kubenswrapper[7845]: I0223 13:02:31.753471 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:02:31.762352 master-0 kubenswrapper[7845]: I0223 13:02:31.762104 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef/installer/0.log" Feb 23 13:02:31.762352 master-0 kubenswrapper[7845]: I0223 13:02:31.762165 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.796671 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-var-lock\") pod \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.796840 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kube-api-access\") pod \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.796882 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kubelet-dir\") pod \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\" (UID: \"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef\") " Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797061 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 
13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797099 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797139 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797168 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797201 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797223 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797281 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797303 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797330 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797354 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797519 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" (UID: "a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.797551 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-var-lock" (OuterVolumeSpecName: "var-lock") pod "a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" (UID: "a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:31.805490 master-0 kubenswrapper[7845]: I0223 13:02:31.802886 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" (UID: "a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:02:31.859576 master-0 kubenswrapper[7845]: I0223 13:02:31.859525 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"
Feb 23 13:02:31.898365 master-0 kubenswrapper[7845]: I0223 13:02:31.898317 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.898465 master-0 kubenswrapper[7845]: I0223 13:02:31.898386 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.898465 master-0 kubenswrapper[7845]: I0223 13:02:31.898446 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.898524 master-0 kubenswrapper[7845]: I0223 13:02:31.898475 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.898524 master-0 kubenswrapper[7845]: I0223 13:02:31.898512 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.898575 master-0 kubenswrapper[7845]: I0223 13:02:31.898543 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.898741 master-0 kubenswrapper[7845]: I0223 13:02:31.898713 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.898827 master-0 kubenswrapper[7845]: I0223 13:02:31.898802 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.898947 master-0 kubenswrapper[7845]: I0223 13:02:31.898774 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.899039 master-0 kubenswrapper[7845]: I0223 13:02:31.899026 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.899152 master-0 kubenswrapper[7845]: I0223 13:02:31.899140 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.899221 master-0 kubenswrapper[7845]: I0223 13:02:31.899053 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.899396 master-0 kubenswrapper[7845]: I0223 13:02:31.899344 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.899491 master-0 kubenswrapper[7845]: I0223 13:02:31.899471 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:31.899491 master-0 kubenswrapper[7845]: I0223 13:02:31.899491 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:31.899570 master-0 kubenswrapper[7845]: I0223 13:02:31.899503 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:02:31.899570 master-0 kubenswrapper[7845]: I0223 13:02:31.899377 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.899570 master-0 kubenswrapper[7845]: E0223 13:02:31.899356 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found
Feb 23 13:02:31.899653 master-0 kubenswrapper[7845]: E0223 13:02:31.899586 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:32.399565417 +0000 UTC m=+86.395296288 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found
Feb 23 13:02:31.899797 master-0 kubenswrapper[7845]: I0223 13:02:31.899783 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.899865 master-0 kubenswrapper[7845]: I0223 13:02:31.899838 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:31.900070 master-0 kubenswrapper[7845]: I0223 13:02:31.899994 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:31.904775 master-0 kubenswrapper[7845]: I0223 13:02:31.904759 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:32.327394 master-0 kubenswrapper[7845]: I0223 13:02:32.327356 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef/installer/0.log"
Feb 23 13:02:32.327750 master-0 kubenswrapper[7845]: I0223 13:02:32.327428 7845 generic.go:334] "Generic (PLEG): container finished" podID="a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" containerID="d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c" exitCode=1
Feb 23 13:02:32.327750 master-0 kubenswrapper[7845]: I0223 13:02:32.327504 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef","Type":"ContainerDied","Data":"d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c"}
Feb 23 13:02:32.327750 master-0 kubenswrapper[7845]: I0223 13:02:32.327523 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 23 13:02:32.327750 master-0 kubenswrapper[7845]: I0223 13:02:32.327554 7845 scope.go:117] "RemoveContainer" containerID="d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c"
Feb 23 13:02:32.327750 master-0 kubenswrapper[7845]: I0223 13:02:32.327538 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef","Type":"ContainerDied","Data":"e51638a9727e021593fede6d0ca2aff58505a6ad0f7e8362eee4ed83b891da4a"}
Feb 23 13:02:32.333303 master-0 kubenswrapper[7845]: I0223 13:02:32.333025 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" event={"ID":"3d85c030-4931-42d7-afd6-72b41789aea8","Type":"ContainerStarted","Data":"ef4daf3f8fd941eb445f32d44b62aedb24be8efc19f3026b4ce6e750b8bf5c07"}
Feb 23 13:02:32.333363 master-0 kubenswrapper[7845]: I0223 13:02:32.333315 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" event={"ID":"3d85c030-4931-42d7-afd6-72b41789aea8","Type":"ContainerStarted","Data":"e863839c35f3d76c23dbc06dbedd4d1482a212122b16325b611cacabea8825bb"}
Feb 23 13:02:32.347865 master-0 kubenswrapper[7845]: I0223 13:02:32.347840 7845 scope.go:117] "RemoveContainer" containerID="d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c"
Feb 23 13:02:32.348350 master-0 kubenswrapper[7845]: E0223 13:02:32.348279 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c\": container with ID starting with d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c not found: ID does not exist" containerID="d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c"
Feb 23 13:02:32.348350 master-0 kubenswrapper[7845]: I0223 13:02:32.348320 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c"} err="failed to get container status \"d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c\": rpc error: code = NotFound desc = could not find container \"d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c\": container with ID starting with d40c27fce4bc149d3b0d78fb3fef61a713470cfd64acf230465c8c79a3a46a3c not found: ID does not exist"
Feb 23 13:02:32.406029 master-0 kubenswrapper[7845]: I0223 13:02:32.405798 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:32.406029 master-0 kubenswrapper[7845]: E0223 13:02:32.405955 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found
Feb 23 13:02:32.406029 master-0 kubenswrapper[7845]: E0223 13:02:32.406000 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:33.405984965 +0000 UTC m=+87.401715836 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found
Feb 23 13:02:33.418674 master-0 kubenswrapper[7845]: I0223 13:02:33.418605 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:33.419560 master-0 kubenswrapper[7845]: E0223 13:02:33.418849 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found
Feb 23 13:02:33.419560 master-0 kubenswrapper[7845]: E0223 13:02:33.418927 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:35.418902335 +0000 UTC m=+89.414633216 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found
Feb 23 13:02:34.347107 master-0 kubenswrapper[7845]: I0223 13:02:34.347058 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerStarted","Data":"a3a1e9eb43281abf59115cccbf31f8f12085bcc8375b2e8193cc6ce9106717fd"}
Feb 23 13:02:34.347212 master-0 kubenswrapper[7845]: I0223 13:02:34.347124 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerStarted","Data":"bf33ebd3a7c944a8b2b4f5b2612fb746b9e2aa4db28f34044a8146fe08ba01df"}
Feb 23 13:02:34.348970 master-0 kubenswrapper[7845]: I0223 13:02:34.348939 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" event={"ID":"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab","Type":"ContainerStarted","Data":"968c5b8de553ecf7fbb19cf43e71a23d0fe90242660dc497be69c702fffc77fa"}
Feb 23 13:02:34.349035 master-0 kubenswrapper[7845]: I0223 13:02:34.348979 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" event={"ID":"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab","Type":"ContainerStarted","Data":"85989737a7ec929d1e675d9796915ac915caa4c8c4efe4243c5dc73d8739ecbf"}
Feb 23 13:02:35.358484 master-0 kubenswrapper[7845]: I0223 13:02:35.358413 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" event={"ID":"3d85c030-4931-42d7-afd6-72b41789aea8","Type":"ContainerStarted","Data":"23f3545fe3ac985d9c6eaafd117cfe2052081891034bfc99e19a78ed966dc30b"}
Feb 23 13:02:35.476142 master-0 kubenswrapper[7845]: I0223 13:02:35.476065 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:35.476534 master-0 kubenswrapper[7845]: E0223 13:02:35.476475 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found
Feb 23 13:02:35.476602 master-0 kubenswrapper[7845]: E0223 13:02:35.476589 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:39.47655948 +0000 UTC m=+93.472290381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found
Feb 23 13:02:37.372972 master-0 kubenswrapper[7845]: I0223 13:02:37.372900 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" event={"ID":"d32952be-0fe3-431f-aa8f-6a35159fa845","Type":"ContainerStarted","Data":"e36049120c7b7a1b6f305f409b9f243014dca1a45ca5d0d44a737b2995cef2d6"}
Feb 23 13:02:38.480001 master-0 kubenswrapper[7845]: I0223 13:02:38.479920 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:38.480720 master-0 kubenswrapper[7845]: I0223 13:02:38.480038 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:39.250354 master-0 kubenswrapper[7845]: I0223 13:02:39.250231 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:39.250699 master-0 kubenswrapper[7845]: I0223 13:02:39.250346 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:39.534843 master-0 kubenswrapper[7845]: I0223 13:02:39.534658 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:39.535836 master-0 kubenswrapper[7845]: E0223 13:02:39.534939 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found
Feb 23 13:02:39.535836 master-0 kubenswrapper[7845]: E0223 13:02:39.535081 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:02:47.535037206 +0000 UTC m=+101.530768117 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found
Feb 23 13:02:41.479974 master-0 kubenswrapper[7845]: I0223 13:02:41.479868 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:41.480800 master-0 kubenswrapper[7845]: I0223 13:02:41.479987 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:42.249988 master-0 kubenswrapper[7845]: I0223 13:02:42.249927 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:42.250452 master-0 kubenswrapper[7845]: I0223 13:02:42.250403 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:44.480199 master-0 kubenswrapper[7845]: I0223 13:02:44.480056 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:44.481178 master-0 kubenswrapper[7845]: I0223 13:02:44.480309 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:44.481178 master-0 kubenswrapper[7845]: I0223 13:02:44.480740 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:02:44.482106 master-0 kubenswrapper[7845]: I0223 13:02:44.482026 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:44.482230 master-0 kubenswrapper[7845]: I0223 13:02:44.482128 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:44.830379 master-0 kubenswrapper[7845]: E0223 13:02:44.830155 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:44.831364 master-0 kubenswrapper[7845]: I0223 13:02:44.831323 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 23 13:02:45.250512 master-0 kubenswrapper[7845]: I0223 13:02:45.250406 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:45.250696 master-0 kubenswrapper[7845]: I0223 13:02:45.250496 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:45.250696 master-0 kubenswrapper[7845]: I0223 13:02:45.250576 7845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:02:45.438479 master-0 kubenswrapper[7845]: I0223 13:02:45.438399 7845 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6" exitCode=0
Feb 23 13:02:45.438698 master-0 kubenswrapper[7845]: I0223 13:02:45.438521 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6"}
Feb 23 13:02:45.438698 master-0 kubenswrapper[7845]: I0223 13:02:45.438574 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"e5215076a24da7b39e84679bbfcb310a83f91ce7772234df3fcbb41f2f595a40"}
Feb 23 13:02:45.439272 master-0 kubenswrapper[7845]: I0223 13:02:45.439179 7845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 23 13:02:45.439362 master-0 kubenswrapper[7845]: I0223 13:02:45.439288 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" containerID="cri-o://c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c" gracePeriod=30
Feb 23 13:02:45.439431 master-0 kubenswrapper[7845]: I0223 13:02:45.439293 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:45.439431 master-0 kubenswrapper[7845]: I0223 13:02:45.439416 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:46.447066 master-0 kubenswrapper[7845]: I0223 13:02:46.446940 7845 generic.go:334] "Generic (PLEG): container finished" podID="05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" containerID="22927b186dd20d4435230884e99b7e79937083b7c678e2250219b649223f7070" exitCode=0
Feb 23 13:02:46.447868 master-0 kubenswrapper[7845]: I0223 13:02:46.447056 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0","Type":"ContainerDied","Data":"22927b186dd20d4435230884e99b7e79937083b7c678e2250219b649223f7070"}
Feb 23 13:02:47.457517 master-0 kubenswrapper[7845]: I0223 13:02:47.457446 7845 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="d3e83b689409ffab35b6bf3a0343f41dbacbec334285a8d86cf53a0625ccbea7" exitCode=1
Feb 23 13:02:47.458329 master-0 kubenswrapper[7845]: I0223 13:02:47.457520 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"d3e83b689409ffab35b6bf3a0343f41dbacbec334285a8d86cf53a0625ccbea7"}
Feb 23 13:02:47.458329 master-0 kubenswrapper[7845]: I0223 13:02:47.457633 7845 scope.go:117] "RemoveContainer" containerID="7d5bdcbce5e54abee67f20bf954b2be91c6e48fe8d182f1c276415bde1e373db"
Feb 23 13:02:47.459228 master-0 kubenswrapper[7845]: I0223 13:02:47.458578 7845 scope.go:117] "RemoveContainer" containerID="d3e83b689409ffab35b6bf3a0343f41dbacbec334285a8d86cf53a0625ccbea7"
Feb 23 13:02:47.480592 master-0 kubenswrapper[7845]: I0223 13:02:47.480521 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:02:47.480813 master-0 kubenswrapper[7845]: I0223 13:02:47.480633 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:02:47.558767 master-0 kubenswrapper[7845]: I0223 13:02:47.558630 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:02:47.558998 master-0 kubenswrapper[7845]: E0223 13:02:47.558891 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found
Feb 23 13:02:47.559117 master-0 kubenswrapper[7845]: E0223 13:02:47.559046 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:03:03.55897979 +0000 UTC m=+117.554710701 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found
Feb 23 13:02:47.879605 master-0 kubenswrapper[7845]: I0223 13:02:47.879545 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 23 13:02:47.964370 master-0 kubenswrapper[7845]: I0223 13:02:47.964297 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-var-lock\") pod \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") "
Feb 23 13:02:47.964597 master-0 kubenswrapper[7845]: I0223 13:02:47.964375 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kubelet-dir\") pod \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") "
Feb 23 13:02:47.964597 master-0 kubenswrapper[7845]: I0223 13:02:47.964433 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kube-api-access\") pod \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\" (UID: \"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0\") "
Feb 23 13:02:47.964597 master-0 kubenswrapper[7845]: I0223 13:02:47.964460 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-var-lock" (OuterVolumeSpecName: "var-lock") pod "05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" (UID: "05bbed42-d2a0-4d6c-a25f-0d75a37dbab0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:02:47.964597 master-0 kubenswrapper[7845]: I0223 13:02:47.964459 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" (UID: "05bbed42-d2a0-4d6c-a25f-0d75a37dbab0"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:47.965161 master-0 kubenswrapper[7845]: I0223 13:02:47.965108 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:47.965161 master-0 kubenswrapper[7845]: I0223 13:02:47.965148 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:47.969938 master-0 kubenswrapper[7845]: I0223 13:02:47.969845 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" (UID: "05bbed42-d2a0-4d6c-a25f-0d75a37dbab0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:48.067065 master-0 kubenswrapper[7845]: I0223 13:02:48.066912 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bbed42-d2a0-4d6c-a25f-0d75a37dbab0-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:48.159192 master-0 kubenswrapper[7845]: E0223 13:02:48.158956 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:02:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:02:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:02:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:02:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817
bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\
\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6\\\"],\\\"sizeBytes\\\":456273550},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6\\\"],\\\"sizeBytes\\\":455311777},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\
"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:02:48.471058 master-0 kubenswrapper[7845]: I0223 13:02:48.470941 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"611039cddaab573cdf7f17e37d453d213099869d69ffbabcba17a4b655a9aee4"} Feb 23 13:02:48.474374 master-0 kubenswrapper[7845]: I0223 13:02:48.474294 7845 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="177a00edcfd919e7d221798cd7875143318357f73a98d1f96f1e3d8cf020354d" exitCode=1 Feb 23 13:02:48.474553 master-0 kubenswrapper[7845]: I0223 13:02:48.474341 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"177a00edcfd919e7d221798cd7875143318357f73a98d1f96f1e3d8cf020354d"} Feb 23 13:02:48.475379 master-0 kubenswrapper[7845]: I0223 13:02:48.475326 7845 scope.go:117] "RemoveContainer" containerID="177a00edcfd919e7d221798cd7875143318357f73a98d1f96f1e3d8cf020354d" Feb 23 13:02:48.478208 master-0 kubenswrapper[7845]: I0223 13:02:48.478143 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"05bbed42-d2a0-4d6c-a25f-0d75a37dbab0","Type":"ContainerDied","Data":"3d15a93ba101f5328b2e0d71137561810703895a3b87feba2b93ea3506aebbec"} Feb 23 13:02:48.478208 master-0 kubenswrapper[7845]: I0223 13:02:48.478204 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d15a93ba101f5328b2e0d71137561810703895a3b87feba2b93ea3506aebbec" Feb 23 
13:02:48.478458 master-0 kubenswrapper[7845]: I0223 13:02:48.478270 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 23 13:02:48.619493 master-0 kubenswrapper[7845]: I0223 13:02:48.619311 7845 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-ld4gj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Feb 23 13:02:48.619493 master-0 kubenswrapper[7845]: I0223 13:02:48.619426 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Feb 23 13:02:49.301105 master-0 kubenswrapper[7845]: E0223 13:02:49.300769 7845 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:02:49.488493 master-0 kubenswrapper[7845]: I0223 13:02:49.488411 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"fd8a73b94af97a6ee5fd332de6ff901ee87339c2669fee29463cd1d6a2935792"} Feb 23 13:02:50.480577 master-0 kubenswrapper[7845]: I0223 13:02:50.480488 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 
10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:02:50.480935 master-0 kubenswrapper[7845]: I0223 13:02:50.480589 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:02:50.499909 master-0 kubenswrapper[7845]: I0223 13:02:50.499829 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_04a14e09-67c1-45e9-af34-bccb2fe3757e/installer/0.log" Feb 23 13:02:50.500673 master-0 kubenswrapper[7845]: I0223 13:02:50.499914 7845 generic.go:334] "Generic (PLEG): container finished" podID="04a14e09-67c1-45e9-af34-bccb2fe3757e" containerID="88e0e24f4f045d3a42d1ee4cfb99a951aeace5cf2e7bece4bd5f41827f8965f5" exitCode=1 Feb 23 13:02:50.500673 master-0 kubenswrapper[7845]: I0223 13:02:50.499961 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"04a14e09-67c1-45e9-af34-bccb2fe3757e","Type":"ContainerDied","Data":"88e0e24f4f045d3a42d1ee4cfb99a951aeace5cf2e7bece4bd5f41827f8965f5"} Feb 23 13:02:51.939616 master-0 kubenswrapper[7845]: I0223 13:02:51.939541 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_04a14e09-67c1-45e9-af34-bccb2fe3757e/installer/0.log" Feb 23 13:02:51.940211 master-0 kubenswrapper[7845]: I0223 13:02:51.939651 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:52.025575 master-0 kubenswrapper[7845]: I0223 13:02:52.025467 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-kubelet-dir\") pod \"04a14e09-67c1-45e9-af34-bccb2fe3757e\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " Feb 23 13:02:52.025864 master-0 kubenswrapper[7845]: I0223 13:02:52.025628 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-var-lock\") pod \"04a14e09-67c1-45e9-af34-bccb2fe3757e\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " Feb 23 13:02:52.025864 master-0 kubenswrapper[7845]: I0223 13:02:52.025669 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "04a14e09-67c1-45e9-af34-bccb2fe3757e" (UID: "04a14e09-67c1-45e9-af34-bccb2fe3757e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:52.026006 master-0 kubenswrapper[7845]: I0223 13:02:52.025859 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04a14e09-67c1-45e9-af34-bccb2fe3757e-kube-api-access\") pod \"04a14e09-67c1-45e9-af34-bccb2fe3757e\" (UID: \"04a14e09-67c1-45e9-af34-bccb2fe3757e\") " Feb 23 13:02:52.026006 master-0 kubenswrapper[7845]: I0223 13:02:52.025887 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-var-lock" (OuterVolumeSpecName: "var-lock") pod "04a14e09-67c1-45e9-af34-bccb2fe3757e" (UID: "04a14e09-67c1-45e9-af34-bccb2fe3757e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:02:52.026624 master-0 kubenswrapper[7845]: I0223 13:02:52.026541 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:52.026624 master-0 kubenswrapper[7845]: I0223 13:02:52.026597 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04a14e09-67c1-45e9-af34-bccb2fe3757e-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:52.031327 master-0 kubenswrapper[7845]: I0223 13:02:52.031268 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a14e09-67c1-45e9-af34-bccb2fe3757e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "04a14e09-67c1-45e9-af34-bccb2fe3757e" (UID: "04a14e09-67c1-45e9-af34-bccb2fe3757e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:02:52.128655 master-0 kubenswrapper[7845]: I0223 13:02:52.128451 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04a14e09-67c1-45e9-af34-bccb2fe3757e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:02:52.513746 master-0 kubenswrapper[7845]: I0223 13:02:52.513684 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_04a14e09-67c1-45e9-af34-bccb2fe3757e/installer/0.log" Feb 23 13:02:52.513746 master-0 kubenswrapper[7845]: I0223 13:02:52.513750 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"04a14e09-67c1-45e9-af34-bccb2fe3757e","Type":"ContainerDied","Data":"c5791c5d88fdddb4fe408255082461994583f6df86d1b6c29e0fb7f97bc9c0ae"} Feb 23 13:02:52.514094 master-0 kubenswrapper[7845]: I0223 13:02:52.513782 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5791c5d88fdddb4fe408255082461994583f6df86d1b6c29e0fb7f97bc9c0ae" Feb 23 13:02:52.514094 master-0 kubenswrapper[7845]: I0223 13:02:52.513846 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:02:53.022973 master-0 kubenswrapper[7845]: I0223 13:02:53.022849 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:02:53.480691 master-0 kubenswrapper[7845]: I0223 13:02:53.480584 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:02:53.480691 master-0 kubenswrapper[7845]: I0223 13:02:53.480675 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:02:55.484893 master-0 kubenswrapper[7845]: I0223 13:02:55.484708 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:02:56.480115 master-0 kubenswrapper[7845]: I0223 13:02:56.480014 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:02:56.480409 master-0 kubenswrapper[7845]: I0223 13:02:56.480110 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:02:58.159627 master-0 kubenswrapper[7845]: E0223 13:02:58.159527 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:02:58.447772 master-0 kubenswrapper[7845]: E0223 13:02:58.447699 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 23 13:02:58.485357 master-0 kubenswrapper[7845]: I0223 13:02:58.485201 7845 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:02:58.618795 master-0 kubenswrapper[7845]: I0223 13:02:58.618736 7845 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-ld4gj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Feb 23 13:02:58.619006 master-0 kubenswrapper[7845]: I0223 13:02:58.618813 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection 
refused" Feb 23 13:02:59.301972 master-0 kubenswrapper[7845]: E0223 13:02:59.301537 7845 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:02:59.479850 master-0 kubenswrapper[7845]: I0223 13:02:59.479746 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:02:59.480150 master-0 kubenswrapper[7845]: I0223 13:02:59.479846 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:02:59.566678 master-0 kubenswrapper[7845]: I0223 13:02:59.566520 7845 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9" exitCode=0 Feb 23 13:02:59.567062 master-0 kubenswrapper[7845]: I0223 13:02:59.566654 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9"} Feb 23 13:02:59.572467 master-0 kubenswrapper[7845]: I0223 13:02:59.572421 7845 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="b2243c1b0e1a884637ce32ff21a340a8fd2d151e689c0ac21c3f49c0279d57f8" exitCode=0 Feb 23 
13:03:01.882992 master-0 kubenswrapper[7845]: I0223 13:03:01.882930 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 23 13:03:01.883434 master-0 kubenswrapper[7845]: I0223 13:03:01.883076 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:03:01.996429 master-0 kubenswrapper[7845]: I0223 13:03:01.996366 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 23 13:03:01.996612 master-0 kubenswrapper[7845]: I0223 13:03:01.996465 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 23 13:03:01.996612 master-0 kubenswrapper[7845]: I0223 13:03:01.996542 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir" (OuterVolumeSpecName: "data-dir") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:03:01.996718 master-0 kubenswrapper[7845]: I0223 13:03:01.996673 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs" (OuterVolumeSpecName: "certs") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:03:01.996864 master-0 kubenswrapper[7845]: I0223 13:03:01.996824 7845 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:03:01.996902 master-0 kubenswrapper[7845]: I0223 13:03:01.996867 7845 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:03:02.215393 master-0 kubenswrapper[7845]: I0223 13:03:02.215315 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12dab5d350ebc129b0bfa4714d330b15" path="/var/lib/kubelet/pods/12dab5d350ebc129b0bfa4714d330b15/volumes" Feb 23 13:03:02.215861 master-0 kubenswrapper[7845]: I0223 13:03:02.215820 7845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:03:02.480897 master-0 kubenswrapper[7845]: I0223 13:03:02.480695 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:02.480897 master-0 kubenswrapper[7845]: I0223 13:03:02.480790 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:02.599023 master-0 kubenswrapper[7845]: I0223 13:03:02.598937 7845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 23 13:03:02.599023 master-0 kubenswrapper[7845]: I0223 13:03:02.598992 7845 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="b58d0f68f1bce11a0ca3232dc9f5a8f1bbd2f9babb595ae60e80f32714fa923e" exitCode=137 Feb 23 13:03:02.599376 master-0 kubenswrapper[7845]: I0223 13:03:02.599061 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:03:03.631614 master-0 kubenswrapper[7845]: I0223 13:03:03.631496 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:03:03.632464 master-0 kubenswrapper[7845]: E0223 13:03:03.631663 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found Feb 23 13:03:03.632464 master-0 kubenswrapper[7845]: E0223 13:03:03.631783 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:03:35.631745429 +0000 UTC m=+149.627476340 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found Feb 23 13:03:04.614004 master-0 kubenswrapper[7845]: I0223 13:03:04.613939 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_1860bead-61b8-4678-b583-c13c79575ef4/installer/0.log" Feb 23 13:03:04.614281 master-0 kubenswrapper[7845]: I0223 13:03:04.614013 7845 generic.go:334] "Generic (PLEG): container finished" podID="1860bead-61b8-4678-b583-c13c79575ef4" containerID="923861d3e14f9f1ed180c6fc4f602226ba1fa39cb2d6ada3746794e2192c190f" exitCode=1 Feb 23 13:03:05.480347 master-0 kubenswrapper[7845]: I0223 13:03:05.480283 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:05.481093 master-0 kubenswrapper[7845]: I0223 13:03:05.480352 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:05.762554 master-0 kubenswrapper[7845]: E0223 13:03:05.762311 7845 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.1896e1c7d061e7a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:31.727687584 +0000 UTC m=+85.723418445,LastTimestamp:2026-02-23 13:02:31.727687584 +0000 UTC m=+85.723418445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:03:05.901955 master-0 kubenswrapper[7845]: E0223 13:03:05.901881 7845 projected.go:194] Error preparing data for projected volume kube-api-access-kpbtg for pod openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 23 13:03:05.902155 master-0 kubenswrapper[7845]: E0223 13:03:05.902039 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:03:06.401997758 +0000 UTC m=+120.397728669 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kpbtg" (UniqueName: "kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 23 13:03:06.472656 master-0 kubenswrapper[7845]: I0223 13:03:06.472524 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:03:06.630142 master-0 kubenswrapper[7845]: I0223 13:03:06.630031 7845 generic.go:334] "Generic (PLEG): container finished" podID="24dab1bc-cf56-429b-93ce-911970c41b5c" containerID="cde99f61030d5fde6382d6afa69998ae8c2f31acfb6e6f4017c5ade4d9e4754a" exitCode=0 Feb 23 13:03:07.641113 master-0 kubenswrapper[7845]: I0223 13:03:07.640898 7845 generic.go:334] "Generic (PLEG): container finished" podID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" containerID="f95ba38760f7dc259e69f00ebd4eabf8bd09b35de53d8f84cbae1abd114eb259" exitCode=0 Feb 23 13:03:08.160142 master-0 kubenswrapper[7845]: E0223 13:03:08.160057 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:08.480994 master-0 kubenswrapper[7845]: I0223 13:03:08.480851 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:08.481342 master-0 kubenswrapper[7845]: I0223 13:03:08.480998 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:08.485141 master-0 kubenswrapper[7845]: I0223 13:03:08.485072 7845 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:08.619422 master-0 kubenswrapper[7845]: I0223 13:03:08.619310 7845 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-ld4gj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Feb 23 13:03:08.619422 master-0 kubenswrapper[7845]: I0223 13:03:08.619391 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Feb 23 13:03:09.303031 master-0 kubenswrapper[7845]: E0223 13:03:09.302894 7845 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:09.656960 master-0 kubenswrapper[7845]: I0223 13:03:09.656845 7845 generic.go:334] "Generic (PLEG): container finished" podID="85958edf-e3da-4704-8f09-cf049101f2e6" containerID="bc8ade9334364114738902823dc600f3740baca0ab4d65155426a77698e2093f" exitCode=0 Feb 23 13:03:11.480423 master-0 kubenswrapper[7845]: I0223 13:03:11.480316 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:11.481228 master-0 kubenswrapper[7845]: I0223 13:03:11.480440 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:12.574765 master-0 kubenswrapper[7845]: E0223 13:03:12.574577 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 23 13:03:12.683298 master-0 kubenswrapper[7845]: I0223 13:03:12.683197 7845 generic.go:334] "Generic (PLEG): container finished" podID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerID="c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c" exitCode=0 Feb 23 13:03:13.700763 master-0 kubenswrapper[7845]: I0223 13:03:13.700644 7845 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" 
containerID="88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036" exitCode=0 Feb 23 13:03:14.480876 master-0 kubenswrapper[7845]: I0223 13:03:14.479770 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:14.480876 master-0 kubenswrapper[7845]: I0223 13:03:14.479905 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:17.480360 master-0 kubenswrapper[7845]: I0223 13:03:17.480285 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:17.480981 master-0 kubenswrapper[7845]: I0223 13:03:17.480389 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:17.730160 master-0 kubenswrapper[7845]: I0223 13:03:17.730059 7845 generic.go:334] "Generic (PLEG): container finished" podID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" containerID="f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62" exitCode=0 Feb 23 13:03:18.161415 
master-0 kubenswrapper[7845]: E0223 13:03:18.161330 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:18.485868 master-0 kubenswrapper[7845]: I0223 13:03:18.485753 7845 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:19.304148 master-0 kubenswrapper[7845]: E0223 13:03:19.304050 7845 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:20.480668 master-0 kubenswrapper[7845]: I0223 13:03:20.480578 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:20.481824 master-0 kubenswrapper[7845]: I0223 13:03:20.480664 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:22.764028 master-0 kubenswrapper[7845]: I0223 13:03:22.763839 
7845 generic.go:334] "Generic (PLEG): container finished" podID="0a80d5ac-27ce-4ba9-809e-28c86b80163b" containerID="1c78631b268af69806ac6e44c535cf690809e56173b2809b3ab9b30ce469dd12" exitCode=0 Feb 23 13:03:22.765896 master-0 kubenswrapper[7845]: I0223 13:03:22.765835 7845 generic.go:334] "Generic (PLEG): container finished" podID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" containerID="8ede5ecb3a272a47d1a15ebb39f7a70622cc8eaa31a144f09ad6e73baceca956" exitCode=0 Feb 23 13:03:23.480381 master-0 kubenswrapper[7845]: I0223 13:03:23.480287 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:23.480381 master-0 kubenswrapper[7845]: I0223 13:03:23.480376 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:26.480690 master-0 kubenswrapper[7845]: I0223 13:03:26.480601 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:26.481737 master-0 kubenswrapper[7845]: I0223 13:03:26.480687 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:28.162630 master-0 kubenswrapper[7845]: E0223 13:03:28.162515 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:28.162630 master-0 kubenswrapper[7845]: E0223 13:03:28.162572 7845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:03:29.304898 master-0 kubenswrapper[7845]: E0223 13:03:29.304776 7845 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:29.304898 master-0 kubenswrapper[7845]: I0223 13:03:29.304864 7845 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 23 13:03:29.481339 master-0 kubenswrapper[7845]: I0223 13:03:29.481207 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:29.481699 master-0 kubenswrapper[7845]: I0223 13:03:29.481355 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:29.943826 
master-0 kubenswrapper[7845]: I0223 13:03:29.943725 7845 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-drk2j container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Feb 23 13:03:29.943826 master-0 kubenswrapper[7845]: I0223 13:03:29.943824 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Feb 23 13:03:31.779764 master-0 kubenswrapper[7845]: I0223 13:03:31.779598 7845 status_manager.go:851] "Failed to get status for pod" podUID="a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" pod="openshift-kube-controller-manager/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Feb 23 13:03:32.460089 master-0 kubenswrapper[7845]: E0223 13:03:32.460016 7845 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:03:32.460089 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-xltpx_openshift-insights_70ccda5f-ca1a-4fce-b77f-a1132f85635a_0(88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4): error adding pod openshift-insights_insights-operator-59b498fcfb-xltpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4" Netns:"/var/run/netns/ad748871-c6a6-4df1-a2ce-7fcd2d6e42d1" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-xltpx;K8S_POD_INFRA_CONTAINER_ID=88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4;K8S_POD_UID=70ccda5f-ca1a-4fce-b77f-a1132f85635a" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-xltpx] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-xltpx/70ccda5f-ca1a-4fce-b77f-a1132f85635a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-xltpx?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:03:32.460089 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:03:32.460089 master-0 kubenswrapper[7845]: > Feb 23 13:03:32.460416 master-0 kubenswrapper[7845]: E0223 13:03:32.460114 7845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:03:32.460416 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-xltpx_openshift-insights_70ccda5f-ca1a-4fce-b77f-a1132f85635a_0(88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4): error adding pod openshift-insights_insights-operator-59b498fcfb-xltpx to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4" Netns:"/var/run/netns/ad748871-c6a6-4df1-a2ce-7fcd2d6e42d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-xltpx;K8S_POD_INFRA_CONTAINER_ID=88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4;K8S_POD_UID=70ccda5f-ca1a-4fce-b77f-a1132f85635a" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-xltpx] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-xltpx/70ccda5f-ca1a-4fce-b77f-a1132f85635a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-xltpx?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:03:32.460416 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:03:32.460416 master-0 kubenswrapper[7845]: > pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:03:32.460416 master-0 kubenswrapper[7845]: E0223 13:03:32.460146 7845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:03:32.460416 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc 
= failed to create pod network sandbox k8s_insights-operator-59b498fcfb-xltpx_openshift-insights_70ccda5f-ca1a-4fce-b77f-a1132f85635a_0(88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4): error adding pod openshift-insights_insights-operator-59b498fcfb-xltpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4" Netns:"/var/run/netns/ad748871-c6a6-4df1-a2ce-7fcd2d6e42d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-xltpx;K8S_POD_INFRA_CONTAINER_ID=88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4;K8S_POD_UID=70ccda5f-ca1a-4fce-b77f-a1132f85635a" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-xltpx] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-xltpx/70ccda5f-ca1a-4fce-b77f-a1132f85635a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-xltpx?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:03:32.460416 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:03:32.460416 
master-0 kubenswrapper[7845]: > pod="openshift-insights/insights-operator-59b498fcfb-xltpx"
Feb 23 13:03:32.460416 master-0 kubenswrapper[7845]: E0223 13:03:32.460260 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-59b498fcfb-xltpx_openshift-insights(70ccda5f-ca1a-4fce-b77f-a1132f85635a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-59b498fcfb-xltpx_openshift-insights(70ccda5f-ca1a-4fce-b77f-a1132f85635a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-xltpx_openshift-insights_70ccda5f-ca1a-4fce-b77f-a1132f85635a_0(88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4): error adding pod openshift-insights_insights-operator-59b498fcfb-xltpx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4\\\" Netns:\\\"/var/run/netns/ad748871-c6a6-4df1-a2ce-7fcd2d6e42d1\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-xltpx;K8S_POD_INFRA_CONTAINER_ID=88aba676bdbc511efd55b02a7d61feb93a2d8be77de39fc5b1adda143274b3f4;K8S_POD_UID=70ccda5f-ca1a-4fce-b77f-a1132f85635a\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-xltpx] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-xltpx/70ccda5f-ca1a-4fce-b77f-a1132f85635a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-xltpx?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" podUID="70ccda5f-ca1a-4fce-b77f-a1132f85635a"
Feb 23 13:03:32.481780 master-0 kubenswrapper[7845]: I0223 13:03:32.481127 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:03:32.481780 master-0 kubenswrapper[7845]: I0223 13:03:32.481216 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:03:32.511644 master-0 kubenswrapper[7845]: E0223 13:03:32.511552 7845 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 23 13:03:32.511644 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager_d91fa6bb-0c88-4930-884a-67e840d58a9f_0(4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342): error adding pod openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342" Netns:"/var/run/netns/112a21fa-3fe1-462b-8632-993a6c8eb398" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-596f79dd6f-mjhwm;K8S_POD_INFRA_CONTAINER_ID=4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342;K8S_POD_UID=d91fa6bb-0c88-4930-884a-67e840d58a9f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm/d91fa6bb-0c88-4930-884a-67e840d58a9f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-596f79dd6f-mjhwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:03:32.511644 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:03:32.511644 master-0 kubenswrapper[7845]: >
Feb 23 13:03:32.511849 master-0 kubenswrapper[7845]: E0223 13:03:32.511687 7845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 23 13:03:32.511849 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager_d91fa6bb-0c88-4930-884a-67e840d58a9f_0(4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342): error adding pod openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342" Netns:"/var/run/netns/112a21fa-3fe1-462b-8632-993a6c8eb398" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-596f79dd6f-mjhwm;K8S_POD_INFRA_CONTAINER_ID=4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342;K8S_POD_UID=d91fa6bb-0c88-4930-884a-67e840d58a9f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm/d91fa6bb-0c88-4930-884a-67e840d58a9f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-596f79dd6f-mjhwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:03:32.511849 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:03:32.511849 master-0 kubenswrapper[7845]: > pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"
Feb 23 13:03:32.511849 master-0 kubenswrapper[7845]: E0223 13:03:32.511733 7845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 23 13:03:32.511849 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager_d91fa6bb-0c88-4930-884a-67e840d58a9f_0(4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342): error adding pod openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342" Netns:"/var/run/netns/112a21fa-3fe1-462b-8632-993a6c8eb398" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-596f79dd6f-mjhwm;K8S_POD_INFRA_CONTAINER_ID=4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342;K8S_POD_UID=d91fa6bb-0c88-4930-884a-67e840d58a9f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm/d91fa6bb-0c88-4930-884a-67e840d58a9f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-596f79dd6f-mjhwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:03:32.511849 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:03:32.511849 master-0 kubenswrapper[7845]: > pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"
Feb 23 13:03:32.512059 master-0 kubenswrapper[7845]: E0223 13:03:32.511909 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager(d91fa6bb-0c88-4930-884a-67e840d58a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager(d91fa6bb-0c88-4930-884a-67e840d58a9f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager_d91fa6bb-0c88-4930-884a-67e840d58a9f_0(4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342): error adding pod openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342\\\" Netns:\\\"/var/run/netns/112a21fa-3fe1-462b-8632-993a6c8eb398\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-596f79dd6f-mjhwm;K8S_POD_INFRA_CONTAINER_ID=4ddc09240d0be35bebbb338d160c9a10839d97eccb352288c60fa143dd1fa342;K8S_POD_UID=d91fa6bb-0c88-4930-884a-67e840d58a9f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm/d91fa6bb-0c88-4930-884a-67e840d58a9f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-596f79dd6f-mjhwm?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" podUID="d91fa6bb-0c88-4930-884a-67e840d58a9f"
Feb 23 13:03:32.601853 master-0 kubenswrapper[7845]: E0223 13:03:32.601791 7845 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 23 13:03:32.601853 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator_f88d6ed3-c0a6-4eef-b80c-417994cf69b0_0(1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a" Netns:"/var/run/netns/241a4660-68f0-4c9f-b874-aa0a38ff03bc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-ck859;K8S_POD_INFRA_CONTAINER_ID=1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a;K8S_POD_UID=f88d6ed3-c0a6-4eef-b80c-417994cf69b0" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859/f88d6ed3-c0a6-4eef-b80c-417994cf69b0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:03:32.601853 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:03:32.601853 master-0 kubenswrapper[7845]: >
Feb 23 13:03:32.602090 master-0 kubenswrapper[7845]: E0223 13:03:32.601896 7845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 23 13:03:32.602090 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator_f88d6ed3-c0a6-4eef-b80c-417994cf69b0_0(1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a" Netns:"/var/run/netns/241a4660-68f0-4c9f-b874-aa0a38ff03bc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-ck859;K8S_POD_INFRA_CONTAINER_ID=1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a;K8S_POD_UID=f88d6ed3-c0a6-4eef-b80c-417994cf69b0" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859/f88d6ed3-c0a6-4eef-b80c-417994cf69b0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:03:32.602090 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:03:32.602090 master-0 kubenswrapper[7845]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"
Feb 23 13:03:32.602090 master-0 kubenswrapper[7845]: E0223 13:03:32.601928 7845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 23 13:03:32.602090 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator_f88d6ed3-c0a6-4eef-b80c-417994cf69b0_0(1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a" Netns:"/var/run/netns/241a4660-68f0-4c9f-b874-aa0a38ff03bc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-ck859;K8S_POD_INFRA_CONTAINER_ID=1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a;K8S_POD_UID=f88d6ed3-c0a6-4eef-b80c-417994cf69b0" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859/f88d6ed3-c0a6-4eef-b80c-417994cf69b0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:03:32.602090 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:03:32.602090 master-0 kubenswrapper[7845]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"
Feb 23 13:03:32.602452 master-0 kubenswrapper[7845]: E0223 13:03:32.602040 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator(f88d6ed3-c0a6-4eef-b80c-417994cf69b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator(f88d6ed3-c0a6-4eef-b80c-417994cf69b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator_f88d6ed3-c0a6-4eef-b80c-417994cf69b0_0(1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a\\\" Netns:\\\"/var/run/netns/241a4660-68f0-4c9f-b874-aa0a38ff03bc\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-ck859;K8S_POD_INFRA_CONTAINER_ID=1528538299f54e946bd6aec4a206275e7e6fdcad9d73d9f4a1bd75df50a6673a;K8S_POD_UID=f88d6ed3-c0a6-4eef-b80c-417994cf69b0\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859/f88d6ed3-c0a6-4eef-b80c-417994cf69b0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0"
Feb 23 13:03:32.826786 master-0 kubenswrapper[7845]: I0223 13:03:32.826335 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"
Feb 23 13:03:32.826786 master-0 kubenswrapper[7845]: I0223 13:03:32.826416 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"
Feb 23 13:03:32.826786 master-0 kubenswrapper[7845]: I0223 13:03:32.826453 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-xltpx"
Feb 23 13:03:32.827894 master-0 kubenswrapper[7845]: I0223 13:03:32.827009 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"
Feb 23 13:03:32.827894 master-0 kubenswrapper[7845]: I0223 13:03:32.827329 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-xltpx"
Feb 23 13:03:32.828549 master-0 kubenswrapper[7845]: I0223 13:03:32.828215 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"
Feb 23 13:03:33.834973 master-0 kubenswrapper[7845]: I0223 13:03:33.834894 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-4wvxd_3d82f223-e28b-4917-8513-3ca5c6e9bff7/approver/0.log"
Feb 23 13:03:33.835952 master-0 kubenswrapper[7845]: I0223 13:03:33.835868 7845 generic.go:334] "Generic (PLEG): container finished" podID="3d82f223-e28b-4917-8513-3ca5c6e9bff7" containerID="c1dd3ed6ae85552fa55579d176bf04ab4acb74f8741f6985ce9c654154b5172e" exitCode=1
Feb 23 13:03:34.845460 master-0 kubenswrapper[7845]: I0223 13:03:34.845418 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/0.log"
Feb 23 13:03:34.846342 master-0 kubenswrapper[7845]: I0223 13:03:34.846299 7845 generic.go:334] "Generic (PLEG): container finished" podID="16898873-740b-4b85-99cf-d25a28d4ab00" containerID="bf33ebd3a7c944a8b2b4f5b2612fb746b9e2aa4db28f34044a8146fe08ba01df" exitCode=1
Feb 23 13:03:35.480418 master-0 kubenswrapper[7845]: I0223 13:03:35.480206 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:03:35.480418 master-0 kubenswrapper[7845]: I0223 13:03:35.480325 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:03:35.697819 master-0 kubenswrapper[7845]: I0223 13:03:35.697752 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:03:35.698235 master-0 kubenswrapper[7845]: E0223 13:03:35.697920 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found
Feb 23 13:03:35.698509 master-0 kubenswrapper[7845]: E0223 13:03:35.698479 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:04:39.6984386 +0000 UTC m=+213.694169501 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found
Feb 23 13:03:36.219374 master-0 kubenswrapper[7845]: E0223 13:03:36.219309 7845 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Feb 23 13:03:36.220188 master-0 kubenswrapper[7845]: E0223 13:03:36.219838 7845 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s"
Feb 23 13:03:36.220188 master-0 kubenswrapper[7845]: I0223 13:03:36.219873 7845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:03:36.220791 master-0 kubenswrapper[7845]: I0223 13:03:36.220729 7845 scope.go:117] "RemoveContainer" containerID="f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62"
Feb 23 13:03:36.233324 master-0 kubenswrapper[7845]: I0223 13:03:36.233225 7845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 23 13:03:38.480795 master-0 kubenswrapper[7845]: I0223 13:03:38.480709 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:03:38.481821 master-0 kubenswrapper[7845]: I0223 13:03:38.480807 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:03:39.305782 master-0 kubenswrapper[7845]: E0223 13:03:39.305635 7845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Feb 23 13:03:39.765399 master-0 kubenswrapper[7845]: E0223 13:03:39.765131 7845 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-autoscaler-operator-86b8dc6d6-6b92p.1896e1c7d5cf6071 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:cluster-autoscaler-operator-86b8dc6d6-6b92p,UID:3d85c030-4931-42d7-afd6-72b41789aea8,APIVersion:v1,ResourceVersion:9466,FieldPath:spec.containers{kube-rbac-proxy},},Reason:Started,Message:Started container kube-rbac-proxy,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:31.818748017 +0000 UTC m=+85.814478888,LastTimestamp:2026-02-23 13:02:31.818748017 +0000 UTC m=+85.814478888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:03:40.476034 master-0 kubenswrapper[7845]: E0223 13:03:40.475922 7845 projected.go:194] Error preparing data for projected volume kube-api-access-kpbtg for pod openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 23 13:03:40.476034 master-0 kubenswrapper[7845]: E0223 13:03:40.476044 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:03:41.476014154 +0000 UTC m=+155.471745065 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kpbtg" (UniqueName: "kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 23 13:03:41.476379 master-0 kubenswrapper[7845]: I0223 13:03:41.476216 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:03:41.480478 master-0 kubenswrapper[7845]: I0223 13:03:41.480413 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:03:41.480633 master-0 kubenswrapper[7845]: I0223 13:03:41.480487 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:03:44.479830 master-0 kubenswrapper[7845]: I0223 13:03:44.479727 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:03:44.479830 master-0 kubenswrapper[7845]: I0223 13:03:44.479826 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:03:46.925903 master-0 kubenswrapper[7845]: I0223 13:03:46.925828 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2d8a9026-ee0a-44c4-9c90-cd863f5461dd/installer/0.log"
Feb 23 13:03:46.926763 master-0 kubenswrapper[7845]: I0223 13:03:46.925904 7845 generic.go:334] "Generic (PLEG): container finished" podID="2d8a9026-ee0a-44c4-9c90-cd863f5461dd" containerID="76debd76d1c83d2501b62235b0e22ba16bdbcca50bf40d8506d768b4e775ec89" exitCode=1
Feb 23 13:03:47.480038 master-0 kubenswrapper[7845]: I0223 13:03:47.479959 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 23 13:03:47.480038 master-0 kubenswrapper[7845]: I0223 13:03:47.480046 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 23 13:03:48.462880 master-0 kubenswrapper[7845]: E0223 13:03:48.462501 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:03:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:03:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:03:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:03:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"si
zeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\
\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b5
8cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6\\\"],\\\"sizeBytes\\\":456273550},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6\\\"],\\\"sizeBytes\\\":455311777},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for 
node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:48.942101 master-0 kubenswrapper[7845]: I0223 13:03:48.942014 7845 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="611039cddaab573cdf7f17e37d453d213099869d69ffbabcba17a4b655a9aee4" exitCode=1 Feb 23 13:03:49.507469 master-0 kubenswrapper[7845]: E0223 13:03:49.507347 7845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 23 13:03:50.479971 master-0 kubenswrapper[7845]: I0223 13:03:50.479844 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:50.479971 master-0 kubenswrapper[7845]: I0223 13:03:50.479957 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:53.480471 master-0 kubenswrapper[7845]: I0223 13:03:53.480349 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" 
start-of-body= Feb 23 13:03:53.480471 master-0 kubenswrapper[7845]: I0223 13:03:53.480442 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:56.480853 master-0 kubenswrapper[7845]: I0223 13:03:56.480799 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:56.481911 master-0 kubenswrapper[7845]: I0223 13:03:56.481535 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:58.463201 master-0 kubenswrapper[7845]: E0223 13:03:58.463085 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:03:59.480227 master-0 kubenswrapper[7845]: I0223 13:03:59.480068 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:03:59.480227 
master-0 kubenswrapper[7845]: I0223 13:03:59.480153 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:03:59.908863 master-0 kubenswrapper[7845]: E0223 13:03:59.908639 7845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 23 13:04:02.480138 master-0 kubenswrapper[7845]: I0223 13:04:02.480054 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:04:02.480807 master-0 kubenswrapper[7845]: I0223 13:04:02.480165 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:04:05.480332 master-0 kubenswrapper[7845]: I0223 13:04:05.480211 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:04:05.481223 master-0 
kubenswrapper[7845]: I0223 13:04:05.480343 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:04:08.463572 master-0 kubenswrapper[7845]: E0223 13:04:08.463457 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:08.480776 master-0 kubenswrapper[7845]: I0223 13:04:08.480716 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:04:08.481057 master-0 kubenswrapper[7845]: I0223 13:04:08.481007 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:04:10.236532 master-0 kubenswrapper[7845]: E0223 13:04:10.236410 7845 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:04:10.237660 master-0 kubenswrapper[7845]: E0223 13:04:10.236704 7845 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" 
actual="34.017s" Feb 23 13:04:10.250213 master-0 kubenswrapper[7845]: I0223 13:04:10.250133 7845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:04:10.710486 master-0 kubenswrapper[7845]: E0223 13:04:10.710357 7845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 23 13:04:11.480784 master-0 kubenswrapper[7845]: I0223 13:04:11.480675 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 23 13:04:11.480784 master-0 kubenswrapper[7845]: I0223 13:04:11.480760 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 23 13:04:13.769511 master-0 kubenswrapper[7845]: E0223 13:04:13.769294 7845 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-autoscaler-operator-86b8dc6d6-6b92p.1896e1c7d72a1bfe openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:cluster-autoscaler-operator-86b8dc6d6-6b92p,UID:3d85c030-4931-42d7-afd6-72b41789aea8,APIVersion:v1,ResourceVersion:9466,FieldPath:spec.containers{cluster-autoscaler-operator},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:31.841471486 +0000 UTC m=+85.837202357,LastTimestamp:2026-02-23 13:02:31.841471486 +0000 UTC m=+85.837202357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:04:15.480081 master-0 kubenswrapper[7845]: I0223 13:04:15.479929 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:15.480081 master-0 kubenswrapper[7845]: E0223 13:04:15.480000 7845 projected.go:194] Error preparing data for projected volume kube-api-access-kpbtg for pod openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 23 13:04:15.481150 master-0 kubenswrapper[7845]: I0223 13:04:15.480071 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Feb 23 13:04:15.481150 master-0 kubenswrapper[7845]: E0223 13:04:15.480118 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:04:17.48008535 +0000 UTC m=+191.475816261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kpbtg" (UniqueName: "kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 23 13:04:17.497415 master-0 kubenswrapper[7845]: I0223 13:04:17.497288 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:04:18.464529 master-0 kubenswrapper[7845]: E0223 13:04:18.464457 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:18.479676 master-0 kubenswrapper[7845]: I0223 13:04:18.479608 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 23 13:04:18.479826 master-0 kubenswrapper[7845]: I0223 13:04:18.479698 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:19.744148 master-0 kubenswrapper[7845]: I0223 13:04:19.743874 7845 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-bckd6 container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.34:8081/healthz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Feb 23 13:04:19.744148 master-0 kubenswrapper[7845]: I0223 13:04:19.744053 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" podUID="bfbb4d6d-7047-48cb-be03-97a57fc688e3" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/healthz\": dial tcp 10.128.0.34:8081: connect: connection refused" Feb 23 13:04:19.744148 master-0 kubenswrapper[7845]: I0223 13:04:19.744073 7845 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-bckd6 container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Feb 23 13:04:19.744148 master-0 kubenswrapper[7845]: I0223 13:04:19.744153 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" podUID="bfbb4d6d-7047-48cb-be03-97a57fc688e3" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: 
connect: connection refused" Feb 23 13:04:20.142897 master-0 kubenswrapper[7845]: I0223 13:04:20.142758 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-bckd6_bfbb4d6d-7047-48cb-be03-97a57fc688e3/manager/0.log" Feb 23 13:04:20.143541 master-0 kubenswrapper[7845]: I0223 13:04:20.143487 7845 generic.go:334] "Generic (PLEG): container finished" podID="bfbb4d6d-7047-48cb-be03-97a57fc688e3" containerID="b8216c6629595ae79e53d792a20a769b60a06e1e5c09e5dc292d86cb2730407e" exitCode=1 Feb 23 13:04:21.480419 master-0 kubenswrapper[7845]: I0223 13:04:21.480328 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:21.480419 master-0 kubenswrapper[7845]: I0223 13:04:21.480399 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:22.310836 master-0 kubenswrapper[7845]: E0223 13:04:22.310722 7845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="3.2s" Feb 23 13:04:24.173645 master-0 kubenswrapper[7845]: I0223 13:04:24.173551 7845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-gswst_dcd03d6e-4c8c-400a-8001-343aaeeca93b/ingress-operator/0.log" Feb 23 13:04:24.174483 master-0 kubenswrapper[7845]: I0223 13:04:24.173661 7845 generic.go:334] "Generic (PLEG): container finished" podID="dcd03d6e-4c8c-400a-8001-343aaeeca93b" containerID="d573c3e0e8ebb6202d8c5ebe9e0d85b859c5927b89cbdd3a205e10371f242b28" exitCode=1 Feb 23 13:04:24.480415 master-0 kubenswrapper[7845]: I0223 13:04:24.480345 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:24.480596 master-0 kubenswrapper[7845]: I0223 13:04:24.480446 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:27.480109 master-0 kubenswrapper[7845]: I0223 13:04:27.480018 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:27.480610 master-0 kubenswrapper[7845]: I0223 13:04:27.480157 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" 
podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:28.197886 master-0 kubenswrapper[7845]: I0223 13:04:28.197778 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/0.log" Feb 23 13:04:28.197886 master-0 kubenswrapper[7845]: I0223 13:04:28.197854 7845 generic.go:334] "Generic (PLEG): container finished" podID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" containerID="9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6" exitCode=1 Feb 23 13:04:28.200450 master-0 kubenswrapper[7845]: I0223 13:04:28.200392 7845 generic.go:334] "Generic (PLEG): container finished" podID="1d953c37-1b74-4ce5-89cb-b3f53454fc57" containerID="611405a04dc23476e0102b383f4f0d51fbb39430cdde420d7a3d20790ecb0a3a" exitCode=0 Feb 23 13:04:28.465792 master-0 kubenswrapper[7845]: E0223 13:04:28.465649 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:28.465792 master-0 kubenswrapper[7845]: E0223 13:04:28.465701 7845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:04:29.257955 master-0 kubenswrapper[7845]: I0223 13:04:29.257840 7845 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-28zcz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 23 
13:04:29.257955 master-0 kubenswrapper[7845]: I0223 13:04:29.257941 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" podUID="1d953c37-1b74-4ce5-89cb-b3f53454fc57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 23 13:04:29.258887 master-0 kubenswrapper[7845]: I0223 13:04:29.257868 7845 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-28zcz container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 23 13:04:29.258887 master-0 kubenswrapper[7845]: I0223 13:04:29.258037 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" podUID="1d953c37-1b74-4ce5-89cb-b3f53454fc57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 23 13:04:29.743450 master-0 kubenswrapper[7845]: I0223 13:04:29.743344 7845 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-bckd6 container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Feb 23 13:04:29.743733 master-0 kubenswrapper[7845]: I0223 13:04:29.743444 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" podUID="bfbb4d6d-7047-48cb-be03-97a57fc688e3" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" Feb 23 13:04:29.942765 master-0 kubenswrapper[7845]: I0223 
13:04:29.942689 7845 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-drk2j container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Feb 23 13:04:29.943037 master-0 kubenswrapper[7845]: I0223 13:04:29.942780 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Feb 23 13:04:30.480461 master-0 kubenswrapper[7845]: I0223 13:04:30.480353 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:30.480461 master-0 kubenswrapper[7845]: I0223 13:04:30.480461 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:31.782666 master-0 kubenswrapper[7845]: I0223 13:04:31.782572 7845 status_manager.go:851] "Failed to get status for pod" podUID="d32952be-0fe3-431f-aa8f-6a35159fa845" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods 
cloud-credential-operator-6968c58f46-gss4v)" Feb 23 13:04:33.480750 master-0 kubenswrapper[7845]: I0223 13:04:33.480561 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:33.480750 master-0 kubenswrapper[7845]: I0223 13:04:33.480659 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: E0223 13:04:33.530034 7845 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-xltpx_openshift-insights_70ccda5f-ca1a-4fce-b77f-a1132f85635a_0(9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649): error adding pod openshift-insights_insights-operator-59b498fcfb-xltpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649" Netns:"/var/run/netns/f74d874f-3f1b-49b5-8538-38d9599c9d6a" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-xltpx;K8S_POD_INFRA_CONTAINER_ID=9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649;K8S_POD_UID=70ccda5f-ca1a-4fce-b77f-a1132f85635a" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-xltpx] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-xltpx/70ccda5f-ca1a-4fce-b77f-a1132f85635a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-xltpx?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: > Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: E0223 13:04:33.530128 7845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-xltpx_openshift-insights_70ccda5f-ca1a-4fce-b77f-a1132f85635a_0(9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649): error adding pod openshift-insights_insights-operator-59b498fcfb-xltpx to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649" Netns:"/var/run/netns/f74d874f-3f1b-49b5-8538-38d9599c9d6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-xltpx;K8S_POD_INFRA_CONTAINER_ID=9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649;K8S_POD_UID=70ccda5f-ca1a-4fce-b77f-a1132f85635a" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-xltpx] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-xltpx/70ccda5f-ca1a-4fce-b77f-a1132f85635a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-xltpx?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: > pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: E0223 13:04:33.530152 7845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc 
= failed to create pod network sandbox k8s_insights-operator-59b498fcfb-xltpx_openshift-insights_70ccda5f-ca1a-4fce-b77f-a1132f85635a_0(9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649): error adding pod openshift-insights_insights-operator-59b498fcfb-xltpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649" Netns:"/var/run/netns/f74d874f-3f1b-49b5-8538-38d9599c9d6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-xltpx;K8S_POD_INFRA_CONTAINER_ID=9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649;K8S_POD_UID=70ccda5f-ca1a-4fce-b77f-a1132f85635a" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-xltpx] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-xltpx/70ccda5f-ca1a-4fce-b77f-a1132f85635a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-xltpx?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.531371 
master-0 kubenswrapper[7845]: > pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:04:33.531371 master-0 kubenswrapper[7845]: E0223 13:04:33.530229 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-59b498fcfb-xltpx_openshift-insights(70ccda5f-ca1a-4fce-b77f-a1132f85635a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-59b498fcfb-xltpx_openshift-insights(70ccda5f-ca1a-4fce-b77f-a1132f85635a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-xltpx_openshift-insights_70ccda5f-ca1a-4fce-b77f-a1132f85635a_0(9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649): error adding pod openshift-insights_insights-operator-59b498fcfb-xltpx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649\\\" Netns:\\\"/var/run/netns/f74d874f-3f1b-49b5-8538-38d9599c9d6a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-xltpx;K8S_POD_INFRA_CONTAINER_ID=9fb901008018e36aa8aa97d1f17f74a7334cc5c3994a9ec003ccf2b74d4cb649;K8S_POD_UID=70ccda5f-ca1a-4fce-b77f-a1132f85635a\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-xltpx] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-xltpx/70ccda5f-ca1a-4fce-b77f-a1132f85635a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-xltpx in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-xltpx?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" podUID="70ccda5f-ca1a-4fce-b77f-a1132f85635a" Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: E0223 13:04:33.562401 7845 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator_f88d6ed3-c0a6-4eef-b80c-417994cf69b0_0(40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e" Netns:"/var/run/netns/1ad72106-a407-4121-a0c2-1e21b26bbc65" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-ck859;K8S_POD_INFRA_CONTAINER_ID=40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e;K8S_POD_UID=f88d6ed3-c0a6-4eef-b80c-417994cf69b0" Path:"" ERRORED: error configuring pod 
[openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859/f88d6ed3-c0a6-4eef-b80c-417994cf69b0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: > Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: E0223 13:04:33.562475 7845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator_f88d6ed3-c0a6-4eef-b80c-417994cf69b0_0(40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e" Netns:"/var/run/netns/1ad72106-a407-4121-a0c2-1e21b26bbc65" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-ck859;K8S_POD_INFRA_CONTAINER_ID=40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e;K8S_POD_UID=f88d6ed3-c0a6-4eef-b80c-417994cf69b0" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859/f88d6ed3-c0a6-4eef-b80c-417994cf69b0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: E0223 13:04:33.562497 7845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: rpc error: code = 
Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator_f88d6ed3-c0a6-4eef-b80c-417994cf69b0_0(40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e" Netns:"/var/run/netns/1ad72106-a407-4121-a0c2-1e21b26bbc65" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-ck859;K8S_POD_INFRA_CONTAINER_ID=40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e;K8S_POD_UID=f88d6ed3-c0a6-4eef-b80c-417994cf69b0" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859/f88d6ed3-c0a6-4eef-b80c-417994cf69b0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.562580 master-0 kubenswrapper[7845]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:04:33.563336 master-0 kubenswrapper[7845]: E0223 13:04:33.562570 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator(f88d6ed3-c0a6-4eef-b80c-417994cf69b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator(f88d6ed3-c0a6-4eef-b80c-417994cf69b0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator_f88d6ed3-c0a6-4eef-b80c-417994cf69b0_0(40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e\\\" Netns:\\\"/var/run/netns/1ad72106-a407-4121-a0c2-1e21b26bbc65\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-ck859;K8S_POD_INFRA_CONTAINER_ID=40601b26bccdfeafed12752fa3c233edc74f422a11a7c8372e3df04e082b092e;K8S_POD_UID=f88d6ed3-c0a6-4eef-b80c-417994cf69b0\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859/f88d6ed3-c0a6-4eef-b80c-417994cf69b0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-ck859 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" Feb 23 13:04:33.660431 master-0 kubenswrapper[7845]: E0223 13:04:33.660337 7845 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:04:33.660431 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager_d91fa6bb-0c88-4930-884a-67e840d58a9f_0(f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa): error adding pod openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" 
failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa" Netns:"/var/run/netns/f09f72da-0687-40a6-927d-2b5d6e145238" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-596f79dd6f-mjhwm;K8S_POD_INFRA_CONTAINER_ID=f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa;K8S_POD_UID=d91fa6bb-0c88-4930-884a-67e840d58a9f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm/d91fa6bb-0c88-4930-884a-67e840d58a9f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-596f79dd6f-mjhwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.660431 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.660431 master-0 kubenswrapper[7845]: > Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: E0223 13:04:33.660441 7845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager_d91fa6bb-0c88-4930-884a-67e840d58a9f_0(f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa): error adding pod openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa" Netns:"/var/run/netns/f09f72da-0687-40a6-927d-2b5d6e145238" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-596f79dd6f-mjhwm;K8S_POD_INFRA_CONTAINER_ID=f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa;K8S_POD_UID=d91fa6bb-0c88-4930-884a-67e840d58a9f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm/d91fa6bb-0c88-4930-884a-67e840d58a9f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-596f79dd6f-mjhwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: > pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: E0223 13:04:33.660487 7845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager_d91fa6bb-0c88-4930-884a-67e840d58a9f_0(f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa): error adding pod openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa" Netns:"/var/run/netns/f09f72da-0687-40a6-927d-2b5d6e145238" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-596f79dd6f-mjhwm;K8S_POD_INFRA_CONTAINER_ID=f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa;K8S_POD_UID=d91fa6bb-0c88-4930-884a-67e840d58a9f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm/d91fa6bb-0c88-4930-884a-67e840d58a9f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster 
comm: SetNetworkStatus: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-596f79dd6f-mjhwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: > pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:04:33.660726 master-0 kubenswrapper[7845]: E0223 13:04:33.660597 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager(d91fa6bb-0c88-4930-884a-67e840d58a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager(d91fa6bb-0c88-4930-884a-67e840d58a9f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-596f79dd6f-mjhwm_openshift-operator-lifecycle-manager_d91fa6bb-0c88-4930-884a-67e840d58a9f_0(f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa): error adding pod openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa\\\" 
Netns:\\\"/var/run/netns/f09f72da-0687-40a6-927d-2b5d6e145238\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-596f79dd6f-mjhwm;K8S_POD_INFRA_CONTAINER_ID=f8b26ba2d0cfb6f502868ce62dc420c1786726d54cec962fd50b897883eceafa;K8S_POD_UID=d91fa6bb-0c88-4930-884a-67e840d58a9f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm/d91fa6bb-0c88-4930-884a-67e840d58a9f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-596f79dd6f-mjhwm in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-596f79dd6f-mjhwm?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" podUID="d91fa6bb-0c88-4930-884a-67e840d58a9f" Feb 23 13:04:34.733779 master-0 kubenswrapper[7845]: E0223 13:04:34.733608 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[auth-proxy-config kube-api-access-kpbtg], unattached volumes=[], failed to process volumes=[]: context 
deadline exceeded" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" Feb 23 13:04:35.241860 master-0 kubenswrapper[7845]: I0223 13:04:35.241768 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:04:35.511747 master-0 kubenswrapper[7845]: E0223 13:04:35.511532 7845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 23 13:04:36.480418 master-0 kubenswrapper[7845]: I0223 13:04:36.480330 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:36.480964 master-0 kubenswrapper[7845]: I0223 13:04:36.480428 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:39.258651 master-0 kubenswrapper[7845]: I0223 13:04:39.258532 7845 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-28zcz container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection 
refused" start-of-body= Feb 23 13:04:39.258651 master-0 kubenswrapper[7845]: I0223 13:04:39.258648 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" podUID="1d953c37-1b74-4ce5-89cb-b3f53454fc57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 23 13:04:39.259719 master-0 kubenswrapper[7845]: I0223 13:04:39.258678 7845 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-28zcz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" start-of-body= Feb 23 13:04:39.259719 master-0 kubenswrapper[7845]: I0223 13:04:39.258775 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" podUID="1d953c37-1b74-4ce5-89cb-b3f53454fc57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.6:8080/healthz\": dial tcp 10.128.0.6:8080: connect: connection refused" Feb 23 13:04:39.480800 master-0 kubenswrapper[7845]: I0223 13:04:39.480691 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:39.481136 master-0 kubenswrapper[7845]: I0223 13:04:39.480814 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:39.743302 master-0 kubenswrapper[7845]: I0223 13:04:39.743105 7845 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-bckd6 container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Feb 23 13:04:39.743302 master-0 kubenswrapper[7845]: I0223 13:04:39.743145 7845 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-bckd6 container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.34:8081/healthz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Feb 23 13:04:39.743795 master-0 kubenswrapper[7845]: I0223 13:04:39.743305 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" podUID="bfbb4d6d-7047-48cb-be03-97a57fc688e3" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" Feb 23 13:04:39.743795 master-0 kubenswrapper[7845]: I0223 13:04:39.743389 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" podUID="bfbb4d6d-7047-48cb-be03-97a57fc688e3" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/healthz\": dial tcp 10.128.0.34:8081: connect: connection refused" Feb 23 13:04:39.800742 master-0 kubenswrapper[7845]: I0223 13:04:39.800602 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " 
pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:04:39.801087 master-0 kubenswrapper[7845]: E0223 13:04:39.800906 7845 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: configmap "kube-rbac-proxy" not found Feb 23 13:04:39.801087 master-0 kubenswrapper[7845]: E0223 13:04:39.801064 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:41.801023934 +0000 UTC m=+335.796754845 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : configmap "kube-rbac-proxy" not found Feb 23 13:04:42.291757 master-0 kubenswrapper[7845]: I0223 13:04:42.291649 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-j5hpl_c0d6008c-6e09-4e61-83a5-60456ca90e1e/manager/0.log" Feb 23 13:04:42.291757 master-0 kubenswrapper[7845]: I0223 13:04:42.291761 7845 generic.go:334] "Generic (PLEG): container finished" podID="c0d6008c-6e09-4e61-83a5-60456ca90e1e" containerID="49260b269ae6d09884492d00790a3a52d5e0644389747da3e51aa260e0b91b26" exitCode=1 Feb 23 13:04:42.479723 master-0 kubenswrapper[7845]: I0223 13:04:42.479612 7845 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:04:42.480049 master-0 
kubenswrapper[7845]: I0223 13:04:42.479731 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:43.301497 master-0 kubenswrapper[7845]: I0223 13:04:43.301426 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/1.log" Feb 23 13:04:43.302823 master-0 kubenswrapper[7845]: I0223 13:04:43.302773 7845 generic.go:334] "Generic (PLEG): container finished" podID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerID="c62b96fd922cdecfa004e96b0409b64671fda2f755f956fa786e2d7faadf3475" exitCode=255 Feb 23 13:04:44.254054 master-0 kubenswrapper[7845]: E0223 13:04:44.253894 7845 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:04:44.254436 master-0 kubenswrapper[7845]: E0223 13:04:44.254182 7845 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s" Feb 23 13:04:44.254436 master-0 kubenswrapper[7845]: I0223 13:04:44.254225 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:04:44.255363 master-0 kubenswrapper[7845]: I0223 13:04:44.255072 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:04:44.255534 master-0 kubenswrapper[7845]: I0223 13:04:44.255454 7845 scope.go:117] "RemoveContainer" containerID="d573c3e0e8ebb6202d8c5ebe9e0d85b859c5927b89cbdd3a205e10371f242b28" Feb 23 13:04:44.258011 master-0 kubenswrapper[7845]: I0223 13:04:44.257949 7845 scope.go:117] "RemoveContainer" containerID="b8216c6629595ae79e53d792a20a769b60a06e1e5c09e5dc292d86cb2730407e" Feb 23 13:04:44.259584 master-0 kubenswrapper[7845]: I0223 13:04:44.259540 7845 scope.go:117] "RemoveContainer" containerID="bf33ebd3a7c944a8b2b4f5b2612fb746b9e2aa4db28f34044a8146fe08ba01df" Feb 23 13:04:44.260140 master-0 kubenswrapper[7845]: I0223 13:04:44.260081 7845 scope.go:117] "RemoveContainer" containerID="c1dd3ed6ae85552fa55579d176bf04ab4acb74f8741f6985ce9c654154b5172e" Feb 23 13:04:44.261350 master-0 kubenswrapper[7845]: I0223 13:04:44.261281 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:04:44.261709 master-0 kubenswrapper[7845]: I0223 13:04:44.261643 7845 scope.go:117] "RemoveContainer" containerID="8ede5ecb3a272a47d1a15ebb39f7a70622cc8eaa31a144f09ad6e73baceca956" Feb 23 13:04:44.262554 master-0 kubenswrapper[7845]: I0223 13:04:44.262523 7845 scope.go:117] "RemoveContainer" containerID="c62b96fd922cdecfa004e96b0409b64671fda2f755f956fa786e2d7faadf3475" Feb 23 13:04:44.262863 master-0 kubenswrapper[7845]: I0223 13:04:44.262802 7845 scope.go:117] "RemoveContainer" containerID="9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6" Feb 23 13:04:44.263732 master-0 kubenswrapper[7845]: I0223 13:04:44.263686 7845 scope.go:117] "RemoveContainer" containerID="bc8ade9334364114738902823dc600f3740baca0ab4d65155426a77698e2093f" Feb 23 13:04:44.263912 master-0 kubenswrapper[7845]: I0223 13:04:44.263857 7845 scope.go:117] "RemoveContainer" 
containerID="611039cddaab573cdf7f17e37d453d213099869d69ffbabcba17a4b655a9aee4" Feb 23 13:04:44.265363 master-0 kubenswrapper[7845]: I0223 13:04:44.265318 7845 scope.go:117] "RemoveContainer" containerID="49260b269ae6d09884492d00790a3a52d5e0644389747da3e51aa260e0b91b26" Feb 23 13:04:44.266229 master-0 kubenswrapper[7845]: I0223 13:04:44.266181 7845 scope.go:117] "RemoveContainer" containerID="611405a04dc23476e0102b383f4f0d51fbb39430cdde420d7a3d20790ecb0a3a" Feb 23 13:04:44.266484 master-0 kubenswrapper[7845]: I0223 13:04:44.266426 7845 scope.go:117] "RemoveContainer" containerID="1c78631b268af69806ac6e44c535cf690809e56173b2809b3ab9b30ce469dd12" Feb 23 13:04:44.267157 master-0 kubenswrapper[7845]: I0223 13:04:44.267049 7845 scope.go:117] "RemoveContainer" containerID="cde99f61030d5fde6382d6afa69998ae8c2f31acfb6e6f4017c5ade4d9e4754a" Feb 23 13:04:44.267994 master-0 kubenswrapper[7845]: I0223 13:04:44.267927 7845 scope.go:117] "RemoveContainer" containerID="f95ba38760f7dc259e69f00ebd4eabf8bd09b35de53d8f84cbae1abd114eb259" Feb 23 13:04:44.269304 master-0 kubenswrapper[7845]: I0223 13:04:44.269190 7845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:04:44.769857 master-0 kubenswrapper[7845]: I0223 13:04:44.769830 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_1860bead-61b8-4678-b583-c13c79575ef4/installer/0.log" Feb 23 13:04:44.774561 master-0 kubenswrapper[7845]: I0223 13:04:44.769905 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:04:44.816740 master-0 kubenswrapper[7845]: I0223 13:04:44.816611 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2d8a9026-ee0a-44c4-9c90-cd863f5461dd/installer/0.log" Feb 23 13:04:44.816740 master-0 kubenswrapper[7845]: I0223 13:04:44.816673 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 23 13:04:44.880227 master-0 kubenswrapper[7845]: I0223 13:04:44.879948 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-var-lock\") pod \"1860bead-61b8-4678-b583-c13c79575ef4\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " Feb 23 13:04:44.880227 master-0 kubenswrapper[7845]: I0223 13:04:44.880033 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-kubelet-dir\") pod \"1860bead-61b8-4678-b583-c13c79575ef4\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " Feb 23 13:04:44.880227 master-0 kubenswrapper[7845]: I0223 13:04:44.880112 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1860bead-61b8-4678-b583-c13c79575ef4-kube-api-access\") pod \"1860bead-61b8-4678-b583-c13c79575ef4\" (UID: \"1860bead-61b8-4678-b583-c13c79575ef4\") " Feb 23 13:04:44.882092 master-0 kubenswrapper[7845]: I0223 13:04:44.882054 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-var-lock" (OuterVolumeSpecName: "var-lock") pod "1860bead-61b8-4678-b583-c13c79575ef4" (UID: "1860bead-61b8-4678-b583-c13c79575ef4"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:04:44.882179 master-0 kubenswrapper[7845]: I0223 13:04:44.882112 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1860bead-61b8-4678-b583-c13c79575ef4" (UID: "1860bead-61b8-4678-b583-c13c79575ef4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:04:44.885272 master-0 kubenswrapper[7845]: I0223 13:04:44.883717 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1860bead-61b8-4678-b583-c13c79575ef4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1860bead-61b8-4678-b583-c13c79575ef4" (UID: "1860bead-61b8-4678-b583-c13c79575ef4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:04:44.981007 master-0 kubenswrapper[7845]: I0223 13:04:44.980930 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kube-api-access\") pod \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " Feb 23 13:04:44.981234 master-0 kubenswrapper[7845]: I0223 13:04:44.981056 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-var-lock\") pod \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " Feb 23 13:04:44.981234 master-0 kubenswrapper[7845]: I0223 13:04:44.981110 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kubelet-dir\") pod 
\"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\" (UID: \"2d8a9026-ee0a-44c4-9c90-cd863f5461dd\") " Feb 23 13:04:44.981234 master-0 kubenswrapper[7845]: I0223 13:04:44.981139 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-var-lock" (OuterVolumeSpecName: "var-lock") pod "2d8a9026-ee0a-44c4-9c90-cd863f5461dd" (UID: "2d8a9026-ee0a-44c4-9c90-cd863f5461dd"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:04:44.981413 master-0 kubenswrapper[7845]: I0223 13:04:44.981307 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d8a9026-ee0a-44c4-9c90-cd863f5461dd" (UID: "2d8a9026-ee0a-44c4-9c90-cd863f5461dd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:04:44.981627 master-0 kubenswrapper[7845]: I0223 13:04:44.981596 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:04:44.981690 master-0 kubenswrapper[7845]: I0223 13:04:44.981649 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:04:44.981690 master-0 kubenswrapper[7845]: I0223 13:04:44.981672 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:04:44.981690 master-0 kubenswrapper[7845]: I0223 13:04:44.981689 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1860bead-61b8-4678-b583-c13c79575ef4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:04:44.981816 master-0 kubenswrapper[7845]: I0223 13:04:44.981707 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1860bead-61b8-4678-b583-c13c79575ef4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:04:44.984177 master-0 kubenswrapper[7845]: I0223 13:04:44.984132 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d8a9026-ee0a-44c4-9c90-cd863f5461dd" (UID: "2d8a9026-ee0a-44c4-9c90-cd863f5461dd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:04:45.083132 master-0 kubenswrapper[7845]: I0223 13:04:45.082972 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d8a9026-ee0a-44c4-9c90-cd863f5461dd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:04:45.319971 master-0 kubenswrapper[7845]: I0223 13:04:45.319908 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/0.log" Feb 23 13:04:45.322380 master-0 kubenswrapper[7845]: I0223 13:04:45.322330 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2d8a9026-ee0a-44c4-9c90-cd863f5461dd/installer/0.log" Feb 23 13:04:45.322505 master-0 kubenswrapper[7845]: I0223 13:04:45.322472 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 23 13:04:45.330750 master-0 kubenswrapper[7845]: I0223 13:04:45.330689 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_1860bead-61b8-4678-b583-c13c79575ef4/installer/0.log" Feb 23 13:04:45.330888 master-0 kubenswrapper[7845]: I0223 13:04:45.330860 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:04:45.348117 master-0 kubenswrapper[7845]: I0223 13:04:45.348043 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-gswst_dcd03d6e-4c8c-400a-8001-343aaeeca93b/ingress-operator/0.log" Feb 23 13:04:45.351635 master-0 kubenswrapper[7845]: I0223 13:04:45.351594 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/1.log" Feb 23 13:04:45.359476 master-0 kubenswrapper[7845]: I0223 13:04:45.359413 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-4wvxd_3d82f223-e28b-4917-8513-3ca5c6e9bff7/approver/0.log" Feb 23 13:04:45.373952 master-0 kubenswrapper[7845]: I0223 13:04:45.373781 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-j5hpl_c0d6008c-6e09-4e61-83a5-60456ca90e1e/manager/0.log" Feb 23 13:04:45.377210 master-0 kubenswrapper[7845]: I0223 13:04:45.377075 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-bckd6_bfbb4d6d-7047-48cb-be03-97a57fc688e3/manager/0.log" Feb 23 13:04:45.381425 master-0 kubenswrapper[7845]: I0223 13:04:45.381307 7845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/0.log" Feb 23 13:04:47.774070 master-0 kubenswrapper[7845]: E0223 13:04:47.773869 7845 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-config-operator-7f8c75f984-82h6s.1896e1c7daa05352 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-operator-7f8c75f984-82h6s,UID:c33f208a-e158-47e2-83d5-ac792bf3a1d5,APIVersion:v1,ResourceVersion:9547,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"auth-proxy-config\" : configmap \"kube-rbac-proxy\" not found,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:31.899550546 +0000 UTC m=+85.895281417,LastTimestamp:2026-02-23 13:02:31.899550546 +0000 UTC m=+85.895281417,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:04:48.862876 master-0 kubenswrapper[7845]: E0223 13:04:48.862583 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:04:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:04:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:04:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:04:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fc
f90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\
"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6\\\"],\\\"sizeBytes\\\":456273550},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6\\\"],\\\"sizeBytes\\\":455311777},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:51.501233 master-0 kubenswrapper[7845]: E0223 13:04:51.501116 7845 projected.go:194] Error preparing data for projected volume kube-api-access-kpbtg for pod openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 23 13:04:51.502217 master-0 kubenswrapper[7845]: E0223 13:04:51.501357 7845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:04:55.501223287 +0000 UTC m=+229.496954348 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kpbtg" (UniqueName: "kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 23 13:04:51.913292 master-0 kubenswrapper[7845]: E0223 13:04:51.913027 7845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:04:55.538499 master-0 kubenswrapper[7845]: I0223 13:04:55.538366 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:04:57.267186 master-0 kubenswrapper[7845]: E0223 13:04:57.267083 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 23 13:04:58.864130 master-0 kubenswrapper[7845]: E0223 13:04:58.864060 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:04:59.522813 master-0 kubenswrapper[7845]: I0223 13:04:59.522706 7845 generic.go:334] "Generic (PLEG): container finished" podID="b4c51b25-f013-4f5c-acbd-598350468192" containerID="c7825c24449084470222f141223b142962350c867bc7733a06b6b459b6dc7405" exitCode=0 Feb 23 13:05:06.285227 master-0 kubenswrapper[7845]: I0223 13:05:06.285102 7845 scope.go:117] "RemoveContainer" containerID="b2243c1b0e1a884637ce32ff21a340a8fd2d151e689c0ac21c3f49c0279d57f8" Feb 23 13:05:06.309044 master-0 kubenswrapper[7845]: I0223 13:05:06.309012 7845 scope.go:117] "RemoveContainer" containerID="b58d0f68f1bce11a0ca3232dc9f5a8f1bbd2f9babb595ae60e80f32714fa923e" Feb 23 13:05:07.573189 master-0 kubenswrapper[7845]: I0223 13:05:07.573136 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/1.log" Feb 23 13:05:07.573761 master-0 kubenswrapper[7845]: I0223 13:05:07.573610 7845 generic.go:334] "Generic (PLEG): container finished" podID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" containerID="548c2b6ddec877e25587f0b887e8188520ed011da1cb3c86a39995da4b475367" exitCode=255 Feb 23 13:05:08.582824 master-0 kubenswrapper[7845]: I0223 13:05:08.582662 7845 generic.go:334] "Generic (PLEG): container finished" podID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerID="cb2d2d4fb80101957c4b13b6c2b179a921353fd0e5984e898b9fcd6ec41fc1bb" exitCode=0 Feb 23 13:05:08.865229 master-0 kubenswrapper[7845]: E0223 13:05:08.864982 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:05:08.914926 
master-0 kubenswrapper[7845]: E0223 13:05:08.914835 7845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:05:10.633059 master-0 kubenswrapper[7845]: I0223 13:05:10.632967 7845 patch_prober.go:28] interesting pod/controller-manager-59947b7887-xg2ln container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 23 13:05:10.633998 master-0 kubenswrapper[7845]: I0223 13:05:10.633094 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 23 13:05:10.633998 master-0 kubenswrapper[7845]: I0223 13:05:10.633150 7845 patch_prober.go:28] interesting pod/controller-manager-59947b7887-xg2ln container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 23 13:05:10.633998 master-0 kubenswrapper[7845]: I0223 13:05:10.633436 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 23 13:05:12.613262 master-0 kubenswrapper[7845]: I0223 13:05:12.613190 7845 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-rvz4w_4bc22782-a369-48aa-a0e8-c1c63ffa3053/control-plane-machine-set-operator/0.log" Feb 23 13:05:12.614542 master-0 kubenswrapper[7845]: I0223 13:05:12.613320 7845 generic.go:334] "Generic (PLEG): container finished" podID="4bc22782-a369-48aa-a0e8-c1c63ffa3053" containerID="0a361025f0f0b4dd3a2d9d3bc39a5bc567c08f5ded2a78f736405795214ce703" exitCode=1 Feb 23 13:05:15.637587 master-0 kubenswrapper[7845]: I0223 13:05:15.637500 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/1.log" Feb 23 13:05:15.639180 master-0 kubenswrapper[7845]: I0223 13:05:15.639115 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/0.log" Feb 23 13:05:15.639311 master-0 kubenswrapper[7845]: I0223 13:05:15.639197 7845 generic.go:334] "Generic (PLEG): container finished" podID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" containerID="b344f0832b62956e749c09fccb690fc11d54040c9d919827bfbb6ce448268045" exitCode=1 Feb 23 13:05:17.650934 master-0 kubenswrapper[7845]: I0223 13:05:17.650813 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-798b897698-j6dvg_21c55fd9-96b6-4dbb-9c26-a499a76cb259/machine-approver-controller/0.log" Feb 23 13:05:17.651557 master-0 kubenswrapper[7845]: I0223 13:05:17.651305 7845 generic.go:334] "Generic (PLEG): container finished" podID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerID="f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17" exitCode=255 Feb 23 13:05:18.272292 master-0 kubenswrapper[7845]: E0223 13:05:18.272168 7845 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 23 13:05:18.272609 master-0 kubenswrapper[7845]: E0223 13:05:18.272546 7845 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s" Feb 23 13:05:18.272726 master-0 kubenswrapper[7845]: I0223 13:05:18.272613 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:05:18.272726 master-0 kubenswrapper[7845]: I0223 13:05:18.272670 7845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:18.284316 master-0 kubenswrapper[7845]: I0223 13:05:18.284222 7845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:05:18.865620 master-0 kubenswrapper[7845]: E0223 13:05:18.865579 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:05:20.632813 master-0 kubenswrapper[7845]: I0223 13:05:20.632724 7845 patch_prober.go:28] interesting pod/controller-manager-59947b7887-xg2ln container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 23 13:05:20.632813 master-0 kubenswrapper[7845]: I0223 13:05:20.632798 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 23 13:05:20.633773 master-0 kubenswrapper[7845]: I0223 13:05:20.632814 7845 patch_prober.go:28] interesting pod/controller-manager-59947b7887-xg2ln container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 23 13:05:20.633773 master-0 kubenswrapper[7845]: I0223 13:05:20.632902 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 23 13:05:21.777911 master-0 kubenswrapper[7845]: E0223 13:05:21.777725 7845 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-config-operator-7f8c75f984-82h6s.1896e1c7daa05352 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-operator-7f8c75f984-82h6s,UID:c33f208a-e158-47e2-83d5-ac792bf3a1d5,APIVersion:v1,ResourceVersion:9547,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"auth-proxy-config\" : configmap \"kube-rbac-proxy\" not found,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:02:31.899550546 +0000 UTC m=+85.895281417,LastTimestamp:2026-02-23 13:02:32.405975965 +0000 UTC m=+86.401706836,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:05:28.486708 
master-0 kubenswrapper[7845]: I0223 13:05:28.486550 7845 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 13:05:28.867599 master-0 kubenswrapper[7845]: E0223 13:05:28.867400 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Feb 23 13:05:28.867599 master-0 kubenswrapper[7845]: E0223 13:05:28.867458 7845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:05:29.542185 master-0 kubenswrapper[7845]: E0223 13:05:29.541993 7845 projected.go:194] Error preparing data for projected volume kube-api-access-kpbtg for pod openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 23 13:05:29.542185 master-0 kubenswrapper[7845]: E0223 13:05:29.542143 7845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:05:37.542105284 +0000 UTC m=+271.537836185 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kpbtg" (UniqueName: "kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 23 13:05:30.633139 master-0 kubenswrapper[7845]: I0223 13:05:30.633015 7845 patch_prober.go:28] interesting pod/controller-manager-59947b7887-xg2ln container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 23 13:05:30.633139 master-0 kubenswrapper[7845]: I0223 13:05:30.633039 7845 patch_prober.go:28] interesting pod/controller-manager-59947b7887-xg2ln container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 23 13:05:30.634216 master-0 kubenswrapper[7845]: I0223 13:05:30.633234 7845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 23 13:05:30.634216 master-0 kubenswrapper[7845]: I0223 13:05:30.633118 7845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 23 13:05:31.784491 master-0 kubenswrapper[7845]: I0223 
13:05:31.784386 7845 status_manager.go:851] "Failed to get status for pod" podUID="a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" pod="openshift-kube-controller-manager/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Feb 23 13:05:37.574830 master-0 kubenswrapper[7845]: I0223 13:05:37.574753 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:05:40.314543 master-0 kubenswrapper[7845]: I0223 13:05:40.314438 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:05:40.315210 master-0 kubenswrapper[7845]: E0223 13:05:40.314962 7845 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="22.042s" Feb 23 13:05:40.315210 master-0 kubenswrapper[7845]: I0223 13:05:40.315014 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1860bead-61b8-4678-b583-c13c79575ef4","Type":"ContainerDied","Data":"923861d3e14f9f1ed180c6fc4f602226ba1fa39cb2d6ada3746794e2192c190f"} Feb 23 13:05:40.315210 master-0 kubenswrapper[7845]: I0223 13:05:40.315069 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 
23 13:05:40.315210 master-0 kubenswrapper[7845]: I0223 13:05:40.315107 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:40.329907 master-0 kubenswrapper[7845]: I0223 13:05:40.329788 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podStartSLOduration=188.503122156 podStartE2EDuration="3m11.329759134s" podCreationTimestamp="2026-02-23 13:02:29 +0000 UTC" firstStartedPulling="2026-02-23 13:02:30.69727588 +0000 UTC m=+84.693006751" lastFinishedPulling="2026-02-23 13:02:33.523912818 +0000 UTC m=+87.519643729" observedRunningTime="2026-02-23 13:05:40.314473936 +0000 UTC m=+274.310204867" watchObservedRunningTime="2026-02-23 13:05:40.329759134 +0000 UTC m=+274.325490035" Feb 23 13:05:40.339301 master-0 kubenswrapper[7845]: I0223 13:05:40.337676 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" podStartSLOduration=189.144220061 podStartE2EDuration="3m12.337644714s" podCreationTimestamp="2026-02-23 13:02:28 +0000 UTC" firstStartedPulling="2026-02-23 13:02:30.324114057 +0000 UTC m=+84.319844938" lastFinishedPulling="2026-02-23 13:02:33.51753869 +0000 UTC m=+87.513269591" observedRunningTime="2026-02-23 13:05:40.336509035 +0000 UTC m=+274.332239976" watchObservedRunningTime="2026-02-23 13:05:40.337644714 +0000 UTC m=+274.333375625" Feb 23 13:05:40.340455 master-0 kubenswrapper[7845]: I0223 13:05:40.339942 7845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.345947 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:05:40.347437 master-0 
kubenswrapper[7845]: I0223 13:05:40.346006 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346031 7845 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="03d08333-3260-4c10-b64e-9cc5416b3da0" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346069 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"] Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346102 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346123 7845 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="03d08333-3260-4c10-b64e-9cc5416b3da0" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346149 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346173 7845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346201 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346231 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346295 7845 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346325 7845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346413 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346440 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346464 7845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346491 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346542 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346566 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" event={"ID":"24dab1bc-cf56-429b-93ce-911970c41b5c","Type":"ContainerDied","Data":"cde99f61030d5fde6382d6afa69998ae8c2f31acfb6e6f4017c5ade4d9e4754a"} Feb 23 13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346617 7845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 
13:05:40.347437 master-0 kubenswrapper[7845]: I0223 13:05:40.346646 7845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.346671 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" event={"ID":"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4","Type":"ContainerDied","Data":"f95ba38760f7dc259e69f00ebd4eabf8bd09b35de53d8f84cbae1abd114eb259"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.347436 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362495 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" event={"ID":"85958edf-e3da-4704-8f09-cf049101f2e6","Type":"ContainerDied","Data":"bc8ade9334364114738902823dc600f3740baca0ab4d65155426a77698e2093f"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362552 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerDied","Data":"c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362580 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerStarted","Data":"c62b96fd922cdecfa004e96b0409b64671fda2f755f956fa786e2d7faadf3475"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362598 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362625 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerDied","Data":"f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362642 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" event={"ID":"0a80d5ac-27ce-4ba9-809e-28c86b80163b","Type":"ContainerDied","Data":"1c78631b268af69806ac6e44c535cf690809e56173b2809b3ab9b30ce469dd12"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362658 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" event={"ID":"ae1799b6-85b0-4aed-8835-35cb3d8d1109","Type":"ContainerDied","Data":"8ede5ecb3a272a47d1a15ebb39f7a70622cc8eaa31a144f09ad6e73baceca956"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362674 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-4wvxd" event={"ID":"3d82f223-e28b-4917-8513-3ca5c6e9bff7","Type":"ContainerDied","Data":"c1dd3ed6ae85552fa55579d176bf04ab4acb74f8741f6985ce9c654154b5172e"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362688 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerDied","Data":"bf33ebd3a7c944a8b2b4f5b2612fb746b9e2aa4db28f34044a8146fe08ba01df"} Feb 23 13:05:40.365272 master-0 
kubenswrapper[7845]: I0223 13:05:40.362698 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.347458 7845 scope.go:117] "RemoveContainer" containerID="548c2b6ddec877e25587f0b887e8188520ed011da1cb3c86a39995da4b475367" Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362702 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerStarted","Data":"548c2b6ddec877e25587f0b887e8188520ed011da1cb3c86a39995da4b475367"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.362979 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2d8a9026-ee0a-44c4-9c90-cd863f5461dd","Type":"ContainerDied","Data":"76debd76d1c83d2501b62235b0e22ba16bdbcca50bf40d8506d768b4e775ec89"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363004 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"611039cddaab573cdf7f17e37d453d213099869d69ffbabcba17a4b655a9aee4"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363021 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" event={"ID":"bfbb4d6d-7047-48cb-be03-97a57fc688e3","Type":"ContainerDied","Data":"b8216c6629595ae79e53d792a20a769b60a06e1e5c09e5dc292d86cb2730407e"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363036 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" 
event={"ID":"dcd03d6e-4c8c-400a-8001-343aaeeca93b","Type":"ContainerDied","Data":"d573c3e0e8ebb6202d8c5ebe9e0d85b859c5927b89cbdd3a205e10371f242b28"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363050 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerDied","Data":"9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363065 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" event={"ID":"1d953c37-1b74-4ce5-89cb-b3f53454fc57","Type":"ContainerDied","Data":"611405a04dc23476e0102b383f4f0d51fbb39430cdde420d7a3d20790ecb0a3a"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363080 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" event={"ID":"c0d6008c-6e09-4e61-83a5-60456ca90e1e","Type":"ContainerDied","Data":"49260b269ae6d09884492d00790a3a52d5e0644389747da3e51aa260e0b91b26"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363093 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerDied","Data":"c62b96fd922cdecfa004e96b0409b64671fda2f755f956fa786e2d7faadf3475"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363108 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"b344f0832b62956e749c09fccb690fc11d54040c9d919827bfbb6ce448268045"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: 
I0223 13:05:40.363117 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2d8a9026-ee0a-44c4-9c90-cd863f5461dd","Type":"ContainerDied","Data":"a88facd6cceb823d7867c66655ebb82fc519bdd5794630121e38248005478c94"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363129 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88facd6cceb823d7867c66655ebb82fc519bdd5794630121e38248005478c94" Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363138 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1860bead-61b8-4678-b583-c13c79575ef4","Type":"ContainerDied","Data":"d55c80b452ec57080fce8905969e2a9fba190533481c5ba5b0159b45e85104dd"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363148 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d55c80b452ec57080fce8905969e2a9fba190533481c5ba5b0159b45e85104dd" Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363159 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"dfd86a94ccff1eeb13e1ddaabeeeb38c3d4bc54e7d5689b425d76ab48acf7562"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363171 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" event={"ID":"dcd03d6e-4c8c-400a-8001-343aaeeca93b","Type":"ContainerStarted","Data":"cfa0d87799396810f28fecb1db2d2995af8c1e625bda9bdf2ef89a91efe10c77"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363183 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" 
event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerStarted","Data":"1d00be7013db5f4871f8f9fcca38d13b794aeb731da6878ede81daa395d911d9"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363193 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" event={"ID":"85958edf-e3da-4704-8f09-cf049101f2e6","Type":"ContainerStarted","Data":"4272a362a8ac66f27c39149ee8833cfb7199e96eefc438602afcb38577af4828"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363203 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-4wvxd" event={"ID":"3d82f223-e28b-4917-8513-3ca5c6e9bff7","Type":"ContainerStarted","Data":"b3ddf54bf6f19c8296e0175ded46bf9b3d3f12dbbe1d4cee2713a7180fbe826e"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363216 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" event={"ID":"1d953c37-1b74-4ce5-89cb-b3f53454fc57","Type":"ContainerStarted","Data":"00e189fb9a66fa8bfe8c8ab05aa3a818d35a806659732011b60d32cd72335a4c"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363227 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" event={"ID":"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4","Type":"ContainerStarted","Data":"7ae02e0df64340d5796187bee35b0a226bdb253a9ea0b0f2d5eec150f3a915b5"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363258 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" event={"ID":"24dab1bc-cf56-429b-93ce-911970c41b5c","Type":"ContainerStarted","Data":"1de43e2fe732c243b299bbf868094d97161f5311abb12214cb33c8e468269941"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363271 7845 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" event={"ID":"c0d6008c-6e09-4e61-83a5-60456ca90e1e","Type":"ContainerStarted","Data":"9a0997d75615489d4d91525d520b1f48b044636546aee09415313e7b839573b0"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363281 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" event={"ID":"bfbb4d6d-7047-48cb-be03-97a57fc688e3","Type":"ContainerStarted","Data":"851d34e72cd075433d8cf4b69dc2fdf69944f4b7cdd7245de32f6eacad0a08da"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363294 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerStarted","Data":"65c1fff907a886de0c20ba50f90af4df31705ea1e7b38b4684f430c20bbd2c46"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.347521 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363307 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" event={"ID":"0a80d5ac-27ce-4ba9-809e-28c86b80163b","Type":"ContainerStarted","Data":"c9ddc1a2cc51a7e5f148d418473bbc98fa1d4f5f8982eb1a143851093791dd61"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363322 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" event={"ID":"ae1799b6-85b0-4aed-8835-35cb3d8d1109","Type":"ContainerStarted","Data":"2232814e0e6f0bab57129339d23cb902f8963539e1dee1b616d27df4af9358d9"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363337 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363350 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363361 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363372 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363382 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363393 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" event={"ID":"b4c51b25-f013-4f5c-acbd-598350468192","Type":"ContainerDied","Data":"c7825c24449084470222f141223b142962350c867bc7733a06b6b459b6dc7405"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363410 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerDied","Data":"548c2b6ddec877e25587f0b887e8188520ed011da1cb3c86a39995da4b475367"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363421 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" event={"ID":"18b48459-51ad-4b0d-8608-4ba6d3fa8e16","Type":"ContainerDied","Data":"cb2d2d4fb80101957c4b13b6c2b179a921353fd0e5984e898b9fcd6ec41fc1bb"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363433 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w" event={"ID":"4bc22782-a369-48aa-a0e8-c1c63ffa3053","Type":"ContainerDied","Data":"0a361025f0f0b4dd3a2d9d3bc39a5bc567c08f5ded2a78f736405795214ce703"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363451 7845 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerDied","Data":"b344f0832b62956e749c09fccb690fc11d54040c9d919827bfbb6ce448268045"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363461 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" event={"ID":"21c55fd9-96b6-4dbb-9c26-a499a76cb259","Type":"ContainerDied","Data":"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17"} Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363437 7845 scope.go:117] "RemoveContainer" containerID="cb2d2d4fb80101957c4b13b6c2b179a921353fd0e5984e898b9fcd6ec41fc1bb" Feb 23 13:05:40.365272 master-0 kubenswrapper[7845]: I0223 13:05:40.363634 7845 scope.go:117] "RemoveContainer" containerID="c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c" Feb 23 13:05:40.368549 master-0 kubenswrapper[7845]: I0223 13:05:40.365680 7845 scope.go:117] "RemoveContainer" containerID="0a361025f0f0b4dd3a2d9d3bc39a5bc567c08f5ded2a78f736405795214ce703" Feb 23 13:05:40.368549 master-0 kubenswrapper[7845]: I0223 13:05:40.365734 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:05:40.368549 master-0 kubenswrapper[7845]: I0223 13:05:40.366106 7845 scope.go:117] "RemoveContainer" containerID="b344f0832b62956e749c09fccb690fc11d54040c9d919827bfbb6ce448268045" Feb 23 13:05:40.368549 master-0 kubenswrapper[7845]: I0223 13:05:40.366411 7845 scope.go:117] "RemoveContainer" containerID="c7825c24449084470222f141223b142962350c867bc7733a06b6b459b6dc7405" Feb 23 13:05:40.368549 master-0 kubenswrapper[7845]: I0223 13:05:40.366970 7845 scope.go:117] "RemoveContainer" containerID="f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17" Feb 23 13:05:40.447711 master-0 kubenswrapper[7845]: I0223 13:05:40.444943 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" podStartSLOduration=188.015295123 podStartE2EDuration="3m10.444927386s" podCreationTimestamp="2026-02-23 13:02:30 +0000 UTC" firstStartedPulling="2026-02-23 13:02:31.841462666 +0000 UTC m=+85.837193537" lastFinishedPulling="2026-02-23 13:02:34.271094939 +0000 UTC m=+88.266825800" observedRunningTime="2026-02-23 13:05:40.425226916 +0000 UTC m=+274.420957787" watchObservedRunningTime="2026-02-23 13:05:40.444927386 +0000 UTC m=+274.440658257" Feb 23 13:05:40.460796 master-0 kubenswrapper[7845]: I0223 13:05:40.460709 7845 scope.go:117] "RemoveContainer" containerID="f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62" Feb 23 13:05:40.587321 master-0 kubenswrapper[7845]: I0223 13:05:40.581686 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 23 13:05:40.614383 master-0 kubenswrapper[7845]: I0223 13:05:40.606914 7845 scope.go:117] "RemoveContainer" containerID="d3e83b689409ffab35b6bf3a0343f41dbacbec334285a8d86cf53a0625ccbea7" Feb 23 13:05:40.614383 master-0 kubenswrapper[7845]: I0223 13:05:40.609402 7845 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 23 13:05:40.637384 master-0 kubenswrapper[7845]: I0223 13:05:40.637334 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:05:40.768934 master-0 kubenswrapper[7845]: I0223 13:05:40.768837 7845 scope.go:117] "RemoveContainer" containerID="9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6" Feb 23 13:05:40.800421 master-0 kubenswrapper[7845]: I0223 13:05:40.798193 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-xltpx"] Feb 23 13:05:40.813510 master-0 kubenswrapper[7845]: I0223 13:05:40.813327 7845 scope.go:117] "RemoveContainer" containerID="c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c" Feb 23 13:05:40.826956 master-0 kubenswrapper[7845]: E0223 13:05:40.826901 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c\": container with ID starting with c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c not found: ID does not exist" containerID="c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c" Feb 23 13:05:40.827118 master-0 kubenswrapper[7845]: I0223 13:05:40.826968 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c"} err="failed to get container status \"c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c\": rpc error: code = NotFound desc = could not find container \"c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c\": container with ID starting with c65806bbb72797b16ca1cc7fb12f55df7a4437f40a45f61de78d10a426366d4c not found: ID does not exist" Feb 23 13:05:40.827118 master-0 
kubenswrapper[7845]: I0223 13:05:40.827002 7845 scope.go:117] "RemoveContainer" containerID="f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62" Feb 23 13:05:40.833054 master-0 kubenswrapper[7845]: E0223 13:05:40.832961 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62\": container with ID starting with f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62 not found: ID does not exist" containerID="f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62" Feb 23 13:05:40.833054 master-0 kubenswrapper[7845]: I0223 13:05:40.833010 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62"} err="failed to get container status \"f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62\": rpc error: code = NotFound desc = could not find container \"f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62\": container with ID starting with f851ec87a4036c52a57197cffc73e94324fe1b28d700748ce2cbe7e609946b62 not found: ID does not exist" Feb 23 13:05:40.833054 master-0 kubenswrapper[7845]: I0223 13:05:40.833045 7845 scope.go:117] "RemoveContainer" containerID="9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6" Feb 23 13:05:40.845526 master-0 kubenswrapper[7845]: E0223 13:05:40.845472 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6\": container with ID starting with 9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6 not found: ID does not exist" containerID="9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6" Feb 23 13:05:40.845594 master-0 kubenswrapper[7845]: I0223 13:05:40.845526 7845 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6"} err="failed to get container status \"9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6\": rpc error: code = NotFound desc = could not find container \"9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6\": container with ID starting with 9434b984208094abfa32d0434e0b6c07ffebc8320b7283d7504e2a0ebf047ea6 not found: ID does not exist" Feb 23 13:05:40.855782 master-0 kubenswrapper[7845]: I0223 13:05:40.855747 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" event={"ID":"18b48459-51ad-4b0d-8608-4ba6d3fa8e16","Type":"ContainerStarted","Data":"156ba0e4f441ce67c6a903cbeb763ed72ee61489eac14300f0897eae83857ad8"} Feb 23 13:05:40.856542 master-0 kubenswrapper[7845]: I0223 13:05:40.856513 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:05:40.856610 master-0 kubenswrapper[7845]: I0223 13:05:40.856591 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:05:40.864144 master-0 kubenswrapper[7845]: I0223 13:05:40.864108 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:05:40.872733 master-0 kubenswrapper[7845]: I0223 13:05:40.872665 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" event={"ID":"d91fa6bb-0c88-4930-884a-67e840d58a9f","Type":"ContainerStarted","Data":"530274c66856c01402fb09a7e42bcd33e3db7cfc133bfc2b3d1f3161af696264"} Feb 23 13:05:40.873052 master-0 kubenswrapper[7845]: I0223 13:05:40.872801 7845 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" event={"ID":"d91fa6bb-0c88-4930-884a-67e840d58a9f","Type":"ContainerStarted","Data":"0602a01933c19c27331c4869229405bde10812971f78fe4544f70f84182ff9cb"} Feb 23 13:05:40.876789 master-0 kubenswrapper[7845]: I0223 13:05:40.876759 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/1.log" Feb 23 13:05:40.881204 master-0 kubenswrapper[7845]: I0223 13:05:40.880108 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/1.log" Feb 23 13:05:40.881204 master-0 kubenswrapper[7845]: I0223 13:05:40.880190 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerStarted","Data":"28759b105ef16fc9766c38f67df6c142da73e18661733246b760f77ad371c2c7"} Feb 23 13:05:40.884328 master-0 kubenswrapper[7845]: I0223 13:05:40.884291 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/1.log" Feb 23 13:05:40.885446 master-0 kubenswrapper[7845]: I0223 13:05:40.885216 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:05:40.885446 master-0 kubenswrapper[7845]: I0223 13:05:40.885270 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:05:40.889488 master-0 kubenswrapper[7845]: I0223 13:05:40.889451 7845 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:05:40.891190 master-0 kubenswrapper[7845]: I0223 13:05:40.891157 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:05:40.917332 master-0 kubenswrapper[7845]: I0223 13:05:40.917282 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"] Feb 23 13:05:41.084143 master-0 kubenswrapper[7845]: I0223 13:05:41.084000 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" podStartSLOduration=186.414161789 podStartE2EDuration="3m14.083984141s" podCreationTimestamp="2026-02-23 13:02:27 +0000 UTC" firstStartedPulling="2026-02-23 13:02:28.940701123 +0000 UTC m=+82.936431994" lastFinishedPulling="2026-02-23 13:02:36.610523475 +0000 UTC m=+90.606254346" observedRunningTime="2026-02-23 13:05:41.082679047 +0000 UTC m=+275.078409918" watchObservedRunningTime="2026-02-23 13:05:41.083984141 +0000 UTC m=+275.079715012" Feb 23 13:05:41.169444 master-0 kubenswrapper[7845]: I0223 13:05:41.169368 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" podStartSLOduration=190.169331446 podStartE2EDuration="3m10.169331446s" podCreationTimestamp="2026-02-23 13:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:41.167441628 +0000 UTC m=+275.163172509" watchObservedRunningTime="2026-02-23 13:05:41.169331446 +0000 UTC m=+275.165062317" Feb 23 13:05:41.622297 master-0 kubenswrapper[7845]: I0223 13:05:41.622144 7845 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:05:41.626290 master-0 kubenswrapper[7845]: I0223 13:05:41.626231 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:05:41.896571 master-0 kubenswrapper[7845]: I0223 13:05:41.896415 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" event={"ID":"b4c51b25-f013-4f5c-acbd-598350468192","Type":"ContainerStarted","Data":"95e4d714a5b0e16564b86ea287bf522f1be8abd96b5a27e8ec1dc65852f2bbda"} Feb 23 13:05:41.900803 master-0 kubenswrapper[7845]: I0223 13:05:41.900768 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-798b897698-j6dvg_21c55fd9-96b6-4dbb-9c26-a499a76cb259/machine-approver-controller/0.log" Feb 23 13:05:41.901226 master-0 kubenswrapper[7845]: I0223 13:05:41.901191 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" event={"ID":"21c55fd9-96b6-4dbb-9c26-a499a76cb259","Type":"ContainerStarted","Data":"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11"} Feb 23 13:05:41.902534 master-0 kubenswrapper[7845]: I0223 13:05:41.902505 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" event={"ID":"f88d6ed3-c0a6-4eef-b80c-417994cf69b0","Type":"ContainerStarted","Data":"92134e9eac995bc624b7c976d7f3c271d22473d1a0968a654d73191099e3ca2d"} Feb 23 13:05:41.903827 master-0 kubenswrapper[7845]: I0223 13:05:41.903797 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-rvz4w_4bc22782-a369-48aa-a0e8-c1c63ffa3053/control-plane-machine-set-operator/0.log" Feb 23 13:05:41.903870 
master-0 kubenswrapper[7845]: I0223 13:05:41.903854 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w" event={"ID":"4bc22782-a369-48aa-a0e8-c1c63ffa3053","Type":"ContainerStarted","Data":"9e38aa42b3fe61c9c1cf925b3c085230297f114549a309d0dbbb04d8b9cb3c23"} Feb 23 13:05:41.905995 master-0 kubenswrapper[7845]: I0223 13:05:41.905964 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/1.log" Feb 23 13:05:41.906056 master-0 kubenswrapper[7845]: I0223 13:05:41.906024 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"fdf69ec24e1c6086e49f484fb8b8dd94cca3653e3ce3d1c63357917cb9333952"} Feb 23 13:05:41.908063 master-0 kubenswrapper[7845]: I0223 13:05:41.908019 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" event={"ID":"70ccda5f-ca1a-4fce-b77f-a1132f85635a","Type":"ContainerStarted","Data":"aaa06fef5e54a39c410b76a0809563d32afa3bde2278654961bb3dcb6c8acd54"} Feb 23 13:05:42.212676 master-0 kubenswrapper[7845]: I0223 13:05:42.212618 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" path="/var/lib/kubelet/pods/a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef/volumes" Feb 23 13:05:44.267696 master-0 kubenswrapper[7845]: E0223 13:05:44.267640 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" is forbidden: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:44.832255 master-0 kubenswrapper[7845]: I0223 13:05:44.832163 7845 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:44.832540 master-0 kubenswrapper[7845]: I0223 13:05:44.832314 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:44.863923 master-0 kubenswrapper[7845]: I0223 13:05:44.863864 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:44.931112 master-0 kubenswrapper[7845]: I0223 13:05:44.931037 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" event={"ID":"70ccda5f-ca1a-4fce-b77f-a1132f85635a","Type":"ContainerStarted","Data":"0d5783f70ff80e76a1a48b27c5b987ff45424a40320a0bbd6848ff62584c3675"} Feb 23 13:05:44.933780 master-0 kubenswrapper[7845]: I0223 13:05:44.933730 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" event={"ID":"f88d6ed3-c0a6-4eef-b80c-417994cf69b0","Type":"ContainerStarted","Data":"2a82c81816ea58ba55512744c24143ddbc2f5aefd0d2aef524a9297835676cb3"} Feb 23 13:05:44.936909 master-0 kubenswrapper[7845]: I0223 13:05:44.936864 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/1.log" Feb 23 13:05:44.938126 master-0 kubenswrapper[7845]: I0223 13:05:44.938094 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/0.log" Feb 23 13:05:44.938168 master-0 kubenswrapper[7845]: I0223 13:05:44.938139 7845 generic.go:334] "Generic (PLEG): container finished" podID="16898873-740b-4b85-99cf-d25a28d4ab00" containerID="65c1fff907a886de0c20ba50f90af4df31705ea1e7b38b4684f430c20bbd2c46" exitCode=1 Feb 23 
13:05:44.938884 master-0 kubenswrapper[7845]: I0223 13:05:44.938850 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerDied","Data":"65c1fff907a886de0c20ba50f90af4df31705ea1e7b38b4684f430c20bbd2c46"} Feb 23 13:05:44.938925 master-0 kubenswrapper[7845]: I0223 13:05:44.938893 7845 scope.go:117] "RemoveContainer" containerID="bf33ebd3a7c944a8b2b4f5b2612fb746b9e2aa4db28f34044a8146fe08ba01df" Feb 23 13:05:44.939354 master-0 kubenswrapper[7845]: I0223 13:05:44.939330 7845 scope.go:117] "RemoveContainer" containerID="65c1fff907a886de0c20ba50f90af4df31705ea1e7b38b4684f430c20bbd2c46" Feb 23 13:05:44.939581 master-0 kubenswrapper[7845]: E0223 13:05:44.939547 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-8mxs2_openshift-machine-api(16898873-740b-4b85-99cf-d25a28d4ab00)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" Feb 23 13:05:44.959443 master-0 kubenswrapper[7845]: I0223 13:05:44.959397 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:44.964293 master-0 kubenswrapper[7845]: I0223 13:05:44.964092 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-59b498fcfb-xltpx" podStartSLOduration=191.01921587 podStartE2EDuration="3m13.964069649s" podCreationTimestamp="2026-02-23 13:02:31 +0000 UTC" firstStartedPulling="2026-02-23 13:05:40.853231206 +0000 UTC m=+274.848962077" lastFinishedPulling="2026-02-23 13:05:43.798084985 +0000 UTC m=+277.793815856" observedRunningTime="2026-02-23 13:05:44.961835103 +0000 UTC m=+278.957565974" 
watchObservedRunningTime="2026-02-23 13:05:44.964069649 +0000 UTC m=+278.959800550" Feb 23 13:05:45.017025 master-0 kubenswrapper[7845]: I0223 13:05:45.016822 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" podStartSLOduration=191.134377543 podStartE2EDuration="3m14.016792157s" podCreationTimestamp="2026-02-23 13:02:31 +0000 UTC" firstStartedPulling="2026-02-23 13:05:40.936435267 +0000 UTC m=+274.932166138" lastFinishedPulling="2026-02-23 13:05:43.818849881 +0000 UTC m=+277.814580752" observedRunningTime="2026-02-23 13:05:45.014273013 +0000 UTC m=+279.010003984" watchObservedRunningTime="2026-02-23 13:05:45.016792157 +0000 UTC m=+279.012523068" Feb 23 13:05:45.954780 master-0 kubenswrapper[7845]: I0223 13:05:45.954640 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/1.log" Feb 23 13:05:49.043622 master-0 kubenswrapper[7845]: E0223 13:05:49.043328 7845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:05:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:05:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:05:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:05:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fc
f90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\
"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6\\\"],\\\"sizeBytes\\\":456273550},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6\\\"],\\\"sizeBytes\\\":455311777},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:05:51.494383 master-0 kubenswrapper[7845]: I0223 13:05:51.494306 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"] 
Feb 23 13:05:51.495870 master-0 kubenswrapper[7845]: I0223 13:05:51.494603 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="kube-rbac-proxy" containerID="cri-o://5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1" gracePeriod=30 Feb 23 13:05:51.495870 master-0 kubenswrapper[7845]: I0223 13:05:51.494751 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="machine-approver-controller" containerID="cri-o://38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11" gracePeriod=30 Feb 23 13:05:51.653739 master-0 kubenswrapper[7845]: I0223 13:05:51.653696 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-798b897698-j6dvg_21c55fd9-96b6-4dbb-9c26-a499a76cb259/machine-approver-controller/0.log" Feb 23 13:05:51.654165 master-0 kubenswrapper[7845]: I0223 13:05:51.654136 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" Feb 23 13:05:51.702091 master-0 kubenswrapper[7845]: I0223 13:05:51.702035 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-auth-proxy-config\") pod \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " Feb 23 13:05:51.702344 master-0 kubenswrapper[7845]: I0223 13:05:51.702180 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsnpf\" (UniqueName: \"kubernetes.io/projected/21c55fd9-96b6-4dbb-9c26-a499a76cb259-kube-api-access-wsnpf\") pod \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " Feb 23 13:05:51.702344 master-0 kubenswrapper[7845]: I0223 13:05:51.702233 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config\") pod \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " Feb 23 13:05:51.702344 master-0 kubenswrapper[7845]: I0223 13:05:51.702294 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21c55fd9-96b6-4dbb-9c26-a499a76cb259-machine-approver-tls\") pod \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\" (UID: \"21c55fd9-96b6-4dbb-9c26-a499a76cb259\") " Feb 23 13:05:51.702625 master-0 kubenswrapper[7845]: I0223 13:05:51.702585 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "21c55fd9-96b6-4dbb-9c26-a499a76cb259" (UID: "21c55fd9-96b6-4dbb-9c26-a499a76cb259"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:05:51.702798 master-0 kubenswrapper[7845]: I0223 13:05:51.702752 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config" (OuterVolumeSpecName: "config") pod "21c55fd9-96b6-4dbb-9c26-a499a76cb259" (UID: "21c55fd9-96b6-4dbb-9c26-a499a76cb259"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:05:51.705577 master-0 kubenswrapper[7845]: I0223 13:05:51.705537 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c55fd9-96b6-4dbb-9c26-a499a76cb259-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "21c55fd9-96b6-4dbb-9c26-a499a76cb259" (UID: "21c55fd9-96b6-4dbb-9c26-a499a76cb259"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:05:51.706202 master-0 kubenswrapper[7845]: I0223 13:05:51.706147 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c55fd9-96b6-4dbb-9c26-a499a76cb259-kube-api-access-wsnpf" (OuterVolumeSpecName: "kube-api-access-wsnpf") pod "21c55fd9-96b6-4dbb-9c26-a499a76cb259" (UID: "21c55fd9-96b6-4dbb-9c26-a499a76cb259"). InnerVolumeSpecName "kube-api-access-wsnpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:05:51.803401 master-0 kubenswrapper[7845]: I0223 13:05:51.803263 7845 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:51.803401 master-0 kubenswrapper[7845]: I0223 13:05:51.803317 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsnpf\" (UniqueName: \"kubernetes.io/projected/21c55fd9-96b6-4dbb-9c26-a499a76cb259-kube-api-access-wsnpf\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:51.803401 master-0 kubenswrapper[7845]: I0223 13:05:51.803332 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21c55fd9-96b6-4dbb-9c26-a499a76cb259-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:51.803401 master-0 kubenswrapper[7845]: I0223 13:05:51.803345 7845 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21c55fd9-96b6-4dbb-9c26-a499a76cb259-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:05:51.996604 master-0 kubenswrapper[7845]: I0223 13:05:51.996560 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-798b897698-j6dvg_21c55fd9-96b6-4dbb-9c26-a499a76cb259/machine-approver-controller/0.log" Feb 23 13:05:51.997446 master-0 kubenswrapper[7845]: I0223 13:05:51.997376 7845 generic.go:334] "Generic (PLEG): container finished" podID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerID="38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11" exitCode=0 Feb 23 13:05:51.997446 master-0 kubenswrapper[7845]: I0223 13:05:51.997438 7845 generic.go:334] "Generic (PLEG): container finished" podID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" 
containerID="5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1" exitCode=0 Feb 23 13:05:51.997573 master-0 kubenswrapper[7845]: I0223 13:05:51.997458 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" Feb 23 13:05:51.997573 master-0 kubenswrapper[7845]: I0223 13:05:51.997463 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" event={"ID":"21c55fd9-96b6-4dbb-9c26-a499a76cb259","Type":"ContainerDied","Data":"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11"} Feb 23 13:05:51.997573 master-0 kubenswrapper[7845]: I0223 13:05:51.997541 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" event={"ID":"21c55fd9-96b6-4dbb-9c26-a499a76cb259","Type":"ContainerDied","Data":"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1"} Feb 23 13:05:51.997573 master-0 kubenswrapper[7845]: I0223 13:05:51.997561 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg" event={"ID":"21c55fd9-96b6-4dbb-9c26-a499a76cb259","Type":"ContainerDied","Data":"0c69dec4a845a27a998ea351ea64ca562e17d952ed5877d2399e163463006b53"} Feb 23 13:05:51.997847 master-0 kubenswrapper[7845]: I0223 13:05:51.997586 7845 scope.go:117] "RemoveContainer" containerID="38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11" Feb 23 13:05:52.019625 master-0 kubenswrapper[7845]: I0223 13:05:52.019570 7845 scope.go:117] "RemoveContainer" containerID="f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17" Feb 23 13:05:52.035550 master-0 kubenswrapper[7845]: I0223 13:05:52.035491 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"] Feb 23 13:05:52.040683 
master-0 kubenswrapper[7845]: I0223 13:05:52.040647 7845 scope.go:117] "RemoveContainer" containerID="5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1" Feb 23 13:05:52.041551 master-0 kubenswrapper[7845]: I0223 13:05:52.041500 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-j6dvg"] Feb 23 13:05:52.057592 master-0 kubenswrapper[7845]: I0223 13:05:52.057510 7845 scope.go:117] "RemoveContainer" containerID="38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11" Feb 23 13:05:52.065024 master-0 kubenswrapper[7845]: E0223 13:05:52.060851 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11\": container with ID starting with 38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11 not found: ID does not exist" containerID="38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11" Feb 23 13:05:52.065024 master-0 kubenswrapper[7845]: I0223 13:05:52.060895 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11"} err="failed to get container status \"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11\": rpc error: code = NotFound desc = could not find container \"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11\": container with ID starting with 38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11 not found: ID does not exist" Feb 23 13:05:52.065024 master-0 kubenswrapper[7845]: I0223 13:05:52.060927 7845 scope.go:117] "RemoveContainer" containerID="f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17" Feb 23 13:05:52.065887 master-0 kubenswrapper[7845]: E0223 13:05:52.065831 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17\": container with ID starting with f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17 not found: ID does not exist" containerID="f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17" Feb 23 13:05:52.065963 master-0 kubenswrapper[7845]: I0223 13:05:52.065898 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17"} err="failed to get container status \"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17\": rpc error: code = NotFound desc = could not find container \"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17\": container with ID starting with f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17 not found: ID does not exist" Feb 23 13:05:52.065963 master-0 kubenswrapper[7845]: I0223 13:05:52.065942 7845 scope.go:117] "RemoveContainer" containerID="5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1" Feb 23 13:05:52.066450 master-0 kubenswrapper[7845]: E0223 13:05:52.066414 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1\": container with ID starting with 5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1 not found: ID does not exist" containerID="5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1" Feb 23 13:05:52.066520 master-0 kubenswrapper[7845]: I0223 13:05:52.066449 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1"} err="failed to get container status \"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1\": rpc error: code = NotFound desc = could not find container 
\"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1\": container with ID starting with 5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1 not found: ID does not exist" Feb 23 13:05:52.066520 master-0 kubenswrapper[7845]: I0223 13:05:52.066473 7845 scope.go:117] "RemoveContainer" containerID="38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11" Feb 23 13:05:52.066776 master-0 kubenswrapper[7845]: I0223 13:05:52.066739 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11"} err="failed to get container status \"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11\": rpc error: code = NotFound desc = could not find container \"38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11\": container with ID starting with 38c4e280e1f5ef2d8b8ea6dc914f9ff457c428dfa40d773747ea73eea575eb11 not found: ID does not exist" Feb 23 13:05:52.066776 master-0 kubenswrapper[7845]: I0223 13:05:52.066769 7845 scope.go:117] "RemoveContainer" containerID="f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17" Feb 23 13:05:52.067101 master-0 kubenswrapper[7845]: I0223 13:05:52.067069 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17"} err="failed to get container status \"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17\": rpc error: code = NotFound desc = could not find container \"f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17\": container with ID starting with f45582d713ba7f5a3231dd4806d3bed2ec2d09709585cfd4e8763db70defaa17 not found: ID does not exist" Feb 23 13:05:52.067101 master-0 kubenswrapper[7845]: I0223 13:05:52.067097 7845 scope.go:117] "RemoveContainer" containerID="5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1" Feb 23 
13:05:52.067371 master-0 kubenswrapper[7845]: I0223 13:05:52.067341 7845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1"} err="failed to get container status \"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1\": rpc error: code = NotFound desc = could not find container \"5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1\": container with ID starting with 5eaa42027dfe743f7060d78b14a41ed77e6a1ffe6e69302eaea8dbd8e960ded1 not found: ID does not exist" Feb 23 13:05:52.210541 master-0 kubenswrapper[7845]: I0223 13:05:52.210488 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" path="/var/lib/kubelet/pods/21c55fd9-96b6-4dbb-9c26-a499a76cb259/volumes" Feb 23 13:05:54.303440 master-0 kubenswrapper[7845]: I0223 13:05:54.303313 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/1.log" Feb 23 13:05:54.499399 master-0 kubenswrapper[7845]: I0223 13:05:54.499173 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/2.log" Feb 23 13:05:54.900518 master-0 kubenswrapper[7845]: I0223 13:05:54.900378 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5ddfd84bb7-vhg7p_c0520301-1a6b-49ca-acca-011692d5b784/fix-audit-permissions/0.log" Feb 23 13:05:55.104726 master-0 kubenswrapper[7845]: I0223 13:05:55.104669 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5ddfd84bb7-vhg7p_c0520301-1a6b-49ca-acca-011692d5b784/oauth-apiserver/0.log" Feb 23 13:05:55.300515 master-0 kubenswrapper[7845]: I0223 
13:05:55.300471 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-6b92p_3d85c030-4931-42d7-afd6-72b41789aea8/kube-rbac-proxy/0.log" Feb 23 13:05:55.501272 master-0 kubenswrapper[7845]: I0223 13:05:55.501201 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-6b92p_3d85c030-4931-42d7-afd6-72b41789aea8/cluster-autoscaler-operator/0.log" Feb 23 13:05:55.738006 master-0 kubenswrapper[7845]: E0223 13:05:55.737896 7845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf33ebd3a7c944a8b2b4f5b2612fb746b9e2aa4db28f34044a8146fe08ba01df\": container with ID starting with bf33ebd3a7c944a8b2b4f5b2612fb746b9e2aa4db28f34044a8146fe08ba01df not found: ID does not exist" containerID="bf33ebd3a7c944a8b2b4f5b2612fb746b9e2aa4db28f34044a8146fe08ba01df" Feb 23 13:05:56.097106 master-0 kubenswrapper[7845]: I0223 13:05:56.096966 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/1.log" Feb 23 13:05:56.207929 master-0 kubenswrapper[7845]: I0223 13:05:56.207884 7845 scope.go:117] "RemoveContainer" containerID="65c1fff907a886de0c20ba50f90af4df31705ea1e7b38b4684f430c20bbd2c46" Feb 23 13:05:56.306271 master-0 kubenswrapper[7845]: I0223 13:05:56.306181 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/baremetal-kube-rbac-proxy/0.log" Feb 23 13:05:56.499263 master-0 kubenswrapper[7845]: I0223 13:05:56.499200 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-rvz4w_4bc22782-a369-48aa-a0e8-c1c63ffa3053/control-plane-machine-set-operator/0.log" Feb 23 
13:05:56.699963 master-0 kubenswrapper[7845]: I0223 13:05:56.699869 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-rvz4w_4bc22782-a369-48aa-a0e8-c1c63ffa3053/control-plane-machine-set-operator/1.log" Feb 23 13:05:56.733446 master-0 kubenswrapper[7845]: I0223 13:05:56.733366 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 23 13:05:56.733899 master-0 kubenswrapper[7845]: E0223 13:05:56.733852 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8a9026-ee0a-44c4-9c90-cd863f5461dd" containerName="installer" Feb 23 13:05:56.733998 master-0 kubenswrapper[7845]: I0223 13:05:56.733903 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8a9026-ee0a-44c4-9c90-cd863f5461dd" containerName="installer" Feb 23 13:05:56.733998 master-0 kubenswrapper[7845]: E0223 13:05:56.733952 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" containerName="installer" Feb 23 13:05:56.733998 master-0 kubenswrapper[7845]: I0223 13:05:56.733971 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" containerName="installer" Feb 23 13:05:56.733998 master-0 kubenswrapper[7845]: E0223 13:05:56.733993 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a14e09-67c1-45e9-af34-bccb2fe3757e" containerName="installer" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: I0223 13:05:56.734014 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a14e09-67c1-45e9-af34-bccb2fe3757e" containerName="installer" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: E0223 13:05:56.734039 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="kube-rbac-proxy" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: I0223 13:05:56.734056 7845 
state_mem.go:107] "Deleted CPUSet assignment" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="kube-rbac-proxy" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: E0223 13:05:56.734082 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="machine-approver-controller" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: I0223 13:05:56.734100 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="machine-approver-controller" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: E0223 13:05:56.734125 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="machine-approver-controller" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: I0223 13:05:56.734142 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="machine-approver-controller" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: E0223 13:05:56.734207 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" containerName="installer" Feb 23 13:05:56.734233 master-0 kubenswrapper[7845]: I0223 13:05:56.734226 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" containerName="installer" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: E0223 13:05:56.734296 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1860bead-61b8-4678-b583-c13c79575ef4" containerName="installer" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734318 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1860bead-61b8-4678-b583-c13c79575ef4" containerName="installer" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734563 7845 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a90f4174-e4ec-4f8c-bf2f-c7fb8803ccef" containerName="installer" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734605 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a14e09-67c1-45e9-af34-bccb2fe3757e" containerName="installer" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734630 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1860bead-61b8-4678-b583-c13c79575ef4" containerName="installer" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734650 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="machine-approver-controller" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734675 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="machine-approver-controller" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734699 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c55fd9-96b6-4dbb-9c26-a499a76cb259" containerName="kube-rbac-proxy" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734728 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" containerName="installer" Feb 23 13:05:56.734768 master-0 kubenswrapper[7845]: I0223 13:05:56.734747 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8a9026-ee0a-44c4-9c90-cd863f5461dd" containerName="installer" Feb 23 13:05:56.736782 master-0 kubenswrapper[7845]: I0223 13:05:56.736734 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.740429 master-0 kubenswrapper[7845]: I0223 13:05:56.740368 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:05:56.740429 master-0 kubenswrapper[7845]: I0223 13:05:56.740378 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 13:05:56.743394 master-0 kubenswrapper[7845]: I0223 13:05:56.743342 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-t58wm" Feb 23 13:05:56.748767 master-0 kubenswrapper[7845]: I0223 13:05:56.748689 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf"] Feb 23 13:05:56.751092 master-0 kubenswrapper[7845]: I0223 13:05:56.751043 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.754074 master-0 kubenswrapper[7845]: I0223 13:05:56.754017 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:05:56.754208 master-0 kubenswrapper[7845]: I0223 13:05:56.754111 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4dmq5" Feb 23 13:05:56.758956 master-0 kubenswrapper[7845]: I0223 13:05:56.758898 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 23 13:05:56.758956 master-0 kubenswrapper[7845]: I0223 13:05:56.758951 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 23 13:05:56.759205 master-0 kubenswrapper[7845]: I0223 13:05:56.758899 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:05:56.761869 master-0 kubenswrapper[7845]: I0223 13:05:56.761803 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf"] Feb 23 13:05:56.763743 master-0 kubenswrapper[7845]: I0223 13:05:56.763691 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.764531 master-0 kubenswrapper[7845]: I0223 13:05:56.764478 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 23 13:05:56.767971 master-0 kubenswrapper[7845]: I0223 13:05:56.767908 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-var-lock\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.768182 master-0 kubenswrapper[7845]: I0223 13:05:56.767981 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.768182 master-0 kubenswrapper[7845]: I0223 13:05:56.768098 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.768357 master-0 kubenswrapper[7845]: I0223 13:05:56.768199 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lptv\" (UniqueName: 
\"kubernetes.io/projected/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-kube-api-access-7lptv\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.768357 master-0 kubenswrapper[7845]: I0223 13:05:56.768271 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.768357 master-0 kubenswrapper[7845]: I0223 13:05:56.768347 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.768564 master-0 kubenswrapper[7845]: I0223 13:05:56.768383 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.768564 master-0 kubenswrapper[7845]: I0223 13:05:56.768431 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfrhg"] Feb 23 13:05:56.768564 master-0 kubenswrapper[7845]: I0223 13:05:56.768469 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.768735 master-0 kubenswrapper[7845]: I0223 13:05:56.768595 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 13:05:56.770008 master-0 kubenswrapper[7845]: I0223 13:05:56.769967 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.775693 master-0 kubenswrapper[7845]: I0223 13:05:56.775625 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 23 13:05:56.777836 master-0 kubenswrapper[7845]: I0223 13:05:56.777785 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 13:05:56.778435 master-0 kubenswrapper[7845]: I0223 13:05:56.778392 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 13:05:56.778558 master-0 kubenswrapper[7845]: I0223 13:05:56.778435 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-8ph7r" Feb 23 13:05:56.778653 master-0 kubenswrapper[7845]: I0223 13:05:56.778625 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 13:05:56.779793 master-0 kubenswrapper[7845]: I0223 13:05:56.779748 7845 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx"] Feb 23 13:05:56.781129 master-0 kubenswrapper[7845]: I0223 13:05:56.781094 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.781475 master-0 kubenswrapper[7845]: I0223 13:05:56.781441 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-vhrrg" Feb 23 13:05:56.781559 master-0 kubenswrapper[7845]: I0223 13:05:56.781516 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 13:05:56.784684 master-0 kubenswrapper[7845]: I0223 13:05:56.784632 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz"] Feb 23 13:05:56.784749 master-0 kubenswrapper[7845]: I0223 13:05:56.784692 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 13:05:56.786008 master-0 kubenswrapper[7845]: I0223 13:05:56.785968 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.790119 master-0 kubenswrapper[7845]: I0223 13:05:56.790066 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 13:05:56.790526 master-0 kubenswrapper[7845]: I0223 13:05:56.790487 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-n9dxs" Feb 23 13:05:56.791696 master-0 kubenswrapper[7845]: I0223 13:05:56.790715 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 13:05:56.791696 master-0 kubenswrapper[7845]: I0223 13:05:56.791010 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 13:05:56.791970 master-0 kubenswrapper[7845]: I0223 13:05:56.791922 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mldw4"] Feb 23 13:05:56.800346 master-0 kubenswrapper[7845]: I0223 13:05:56.793775 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.800346 master-0 kubenswrapper[7845]: I0223 13:05:56.797080 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2628k" Feb 23 13:05:56.809328 master-0 kubenswrapper[7845]: I0223 13:05:56.805323 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mldw4"] Feb 23 13:05:56.813319 master-0 kubenswrapper[7845]: I0223 13:05:56.812853 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz"] Feb 23 13:05:56.818604 master-0 kubenswrapper[7845]: I0223 13:05:56.818115 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx"] Feb 23 13:05:56.833230 master-0 kubenswrapper[7845]: I0223 13:05:56.831809 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfrhg"] Feb 23 13:05:56.864655 master-0 kubenswrapper[7845]: I0223 13:05:56.864525 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.864500156 podStartE2EDuration="864.500156ms" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:56.859898309 +0000 UTC m=+290.855629180" watchObservedRunningTime="2026-02-23 13:05:56.864500156 +0000 UTC m=+290.860231037" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.869799 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " 
pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.869875 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24gm8\" (UniqueName: \"kubernetes.io/projected/430cb782-18d5-4429-99ef-29d3dca0d803-kube-api-access-24gm8\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.869909 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-catalog-content\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.869941 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzwh\" (UniqueName: \"kubernetes.io/projected/29908b4a-0df5-4c46-b886-c968976c25fb-kube-api-access-dbzwh\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.869972 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2s4f\" (UniqueName: \"kubernetes.io/projected/0128982b-01b4-49cb-ab4a-8759b844c86b-kube-api-access-b2s4f\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.870005 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8l6fp\" (UniqueName: \"kubernetes.io/projected/54411ade-3383-48aa-ba10-62ffb40185b9-kube-api-access-8l6fp\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.870048 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.870080 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.870121 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.870157 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.870188 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-utilities\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.870267 master-0 kubenswrapper[7845]: I0223 13:05:56.870224 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.870429 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/54411ade-3383-48aa-ba10-62ffb40185b9-tmpfs\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871152 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" 
Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871195 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lptv\" (UniqueName: \"kubernetes.io/projected/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-kube-api-access-7lptv\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871339 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871609 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871656 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871686 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871714 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871746 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871818 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-catalog-content\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871854 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: 
\"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871943 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-var-lock\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.871896 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-var-lock\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.872062 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.872084 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.872292 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ts56d\" (UniqueName: \"kubernetes.io/projected/8db940c1-82ba-4b6e-8137-059e26ab1ced-kube-api-access-ts56d\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.872335 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-utilities\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.872407 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.880625 master-0 kubenswrapper[7845]: I0223 13:05:56.873113 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.882965 master-0 kubenswrapper[7845]: I0223 13:05:56.882886 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.888525 master-0 kubenswrapper[7845]: I0223 13:05:56.888459 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:56.897034 master-0 kubenswrapper[7845]: I0223 13:05:56.897009 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lptv\" (UniqueName: \"kubernetes.io/projected/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-kube-api-access-7lptv\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:56.910766 master-0 kubenswrapper[7845]: I0223 13:05:56.910709 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-drk2j_03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/etcd-operator/0.log" Feb 23 13:05:56.973881 master-0 kubenswrapper[7845]: I0223 13:05:56.973709 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.973881 master-0 kubenswrapper[7845]: I0223 13:05:56.973784 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.973881 master-0 kubenswrapper[7845]: I0223 13:05:56.973806 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.973881 master-0 kubenswrapper[7845]: I0223 13:05:56.973831 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-catalog-content\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.973881 master-0 kubenswrapper[7845]: I0223 13:05:56.973853 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts56d\" (UniqueName: \"kubernetes.io/projected/8db940c1-82ba-4b6e-8137-059e26ab1ced-kube-api-access-ts56d\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.973881 master-0 kubenswrapper[7845]: I0223 13:05:56.973871 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-utilities\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.973891 7845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.973910 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24gm8\" (UniqueName: \"kubernetes.io/projected/430cb782-18d5-4429-99ef-29d3dca0d803-kube-api-access-24gm8\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.973923 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-catalog-content\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.973938 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzwh\" (UniqueName: \"kubernetes.io/projected/29908b4a-0df5-4c46-b886-c968976c25fb-kube-api-access-dbzwh\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.973956 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2s4f\" (UniqueName: \"kubernetes.io/projected/0128982b-01b4-49cb-ab4a-8759b844c86b-kube-api-access-b2s4f\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " 
pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.973976 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l6fp\" (UniqueName: \"kubernetes.io/projected/54411ade-3383-48aa-ba10-62ffb40185b9-kube-api-access-8l6fp\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.973993 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.974016 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.974038 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.974055 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-utilities\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.974076 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/54411ade-3383-48aa-ba10-62ffb40185b9-tmpfs\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.974195 master-0 kubenswrapper[7845]: I0223 13:05:56.974095 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.974990 master-0 kubenswrapper[7845]: I0223 13:05:56.974961 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.975151 master-0 kubenswrapper[7845]: I0223 13:05:56.975105 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-catalog-content\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.975530 master-0 kubenswrapper[7845]: I0223 13:05:56.975489 7845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-utilities\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.976044 master-0 kubenswrapper[7845]: I0223 13:05:56.976007 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.976343 master-0 kubenswrapper[7845]: I0223 13:05:56.976311 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/54411ade-3383-48aa-ba10-62ffb40185b9-tmpfs\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.976728 master-0 kubenswrapper[7845]: I0223 13:05:56.976689 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-catalog-content\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.976789 master-0 kubenswrapper[7845]: I0223 13:05:56.976774 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.979461 master-0 kubenswrapper[7845]: I0223 
13:05:56.977866 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.979461 master-0 kubenswrapper[7845]: I0223 13:05:56.978961 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.980086 master-0 kubenswrapper[7845]: I0223 13:05:56.980051 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.980529 master-0 kubenswrapper[7845]: I0223 13:05:56.980507 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.985533 master-0 kubenswrapper[7845]: I0223 13:05:56.985490 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-utilities\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " 
pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.989376 master-0 kubenswrapper[7845]: I0223 13:05:56.989208 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2s4f\" (UniqueName: \"kubernetes.io/projected/0128982b-01b4-49cb-ab4a-8759b844c86b-kube-api-access-b2s4f\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:56.991763 master-0 kubenswrapper[7845]: I0223 13:05:56.991723 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzwh\" (UniqueName: \"kubernetes.io/projected/29908b4a-0df5-4c46-b886-c968976c25fb-kube-api-access-dbzwh\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:56.994298 master-0 kubenswrapper[7845]: I0223 13:05:56.993860 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:56.994595 master-0 kubenswrapper[7845]: I0223 13:05:56.994564 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts56d\" (UniqueName: \"kubernetes.io/projected/8db940c1-82ba-4b6e-8137-059e26ab1ced-kube-api-access-ts56d\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:56.994727 master-0 kubenswrapper[7845]: I0223 13:05:56.994700 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24gm8\" (UniqueName: 
\"kubernetes.io/projected/430cb782-18d5-4429-99ef-29d3dca0d803-kube-api-access-24gm8\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:56.996066 master-0 kubenswrapper[7845]: I0223 13:05:56.996031 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l6fp\" (UniqueName: \"kubernetes.io/projected/54411ade-3383-48aa-ba10-62ffb40185b9-kube-api-access-8l6fp\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:57.030080 master-0 kubenswrapper[7845]: I0223 13:05:57.030022 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/1.log" Feb 23 13:05:57.030679 master-0 kubenswrapper[7845]: I0223 13:05:57.030640 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerStarted","Data":"aab74ca70685126f8898c1a27065ea70c7d1d230ea4b10b604c9d038a279487c"} Feb 23 13:05:57.043161 master-0 kubenswrapper[7845]: E0223 13:05:57.043124 7845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 23 13:05:57.070320 master-0 kubenswrapper[7845]: I0223 13:05:57.070267 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:05:57.099280 master-0 kubenswrapper[7845]: I0223 13:05:57.099223 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-drk2j_03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/etcd-operator/1.log" Feb 23 13:05:57.103773 master-0 kubenswrapper[7845]: I0223 13:05:57.103738 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:05:57.119323 master-0 kubenswrapper[7845]: I0223 13:05:57.119101 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:05:57.126177 master-0 kubenswrapper[7845]: W0223 13:05:57.126125 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfac71a3d_cfbb_49d2_9a5c_c3ed714a933e.slice/crio-b608c73a48de5d50e74c55aca28591372e15d9f2c907a4169def9790022466af WatchSource:0}: Error finding container b608c73a48de5d50e74c55aca28591372e15d9f2c907a4169def9790022466af: Status 404 returned error can't find the container with id b608c73a48de5d50e74c55aca28591372e15d9f2c907a4169def9790022466af Feb 23 13:05:57.146357 master-0 kubenswrapper[7845]: W0223 13:05:57.146231 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod430cb782_18d5_4429_99ef_29d3dca0d803.slice/crio-c0138fc447fbdee86ffbe815a7ddaa8ef72faf5cdfc02ebf5b12e2363a575ee0 WatchSource:0}: Error finding container c0138fc447fbdee86ffbe815a7ddaa8ef72faf5cdfc02ebf5b12e2363a575ee0: Status 404 returned error can't find the container with id c0138fc447fbdee86ffbe815a7ddaa8ef72faf5cdfc02ebf5b12e2363a575ee0 Feb 23 13:05:57.156669 master-0 kubenswrapper[7845]: I0223 13:05:57.156627 
7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:05:57.176209 master-0 kubenswrapper[7845]: I0223 13:05:57.176170 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:57.237171 master-0 kubenswrapper[7845]: I0223 13:05:57.237026 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:05:57.256470 master-0 kubenswrapper[7845]: I0223 13:05:57.256419 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:05:57.306592 master-0 kubenswrapper[7845]: I0223 13:05:57.303856 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_05bbed42-d2a0-4d6c-a25f-0d75a37dbab0/installer/0.log" Feb 23 13:05:57.343638 master-0 kubenswrapper[7845]: I0223 13:05:57.342793 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ccxr7"] Feb 23 13:05:57.345535 master-0 kubenswrapper[7845]: I0223 13:05:57.345473 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.351293 master-0 kubenswrapper[7845]: I0223 13:05:57.348052 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-977zq" Feb 23 13:05:57.356198 master-0 kubenswrapper[7845]: I0223 13:05:57.355583 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccxr7"] Feb 23 13:05:57.386719 master-0 kubenswrapper[7845]: I0223 13:05:57.386648 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhh8\" (UniqueName: \"kubernetes.io/projected/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-kube-api-access-7jhh8\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.386978 master-0 kubenswrapper[7845]: I0223 13:05:57.386827 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-utilities\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.386978 master-0 kubenswrapper[7845]: I0223 13:05:57.386871 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-catalog-content\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.488124 master-0 kubenswrapper[7845]: I0223 13:05:57.488016 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-utilities\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.488124 master-0 kubenswrapper[7845]: I0223 13:05:57.488067 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-catalog-content\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.488124 master-0 kubenswrapper[7845]: I0223 13:05:57.488120 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jhh8\" (UniqueName: \"kubernetes.io/projected/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-kube-api-access-7jhh8\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.488855 master-0 kubenswrapper[7845]: I0223 13:05:57.488819 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-utilities\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.489086 master-0 kubenswrapper[7845]: I0223 13:05:57.489052 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-catalog-content\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.501557 master-0 kubenswrapper[7845]: I0223 13:05:57.501516 7845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-dgldn_4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/kube-apiserver-operator/0.log" Feb 23 13:05:57.508824 master-0 kubenswrapper[7845]: I0223 13:05:57.508787 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jhh8\" (UniqueName: \"kubernetes.io/projected/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-kube-api-access-7jhh8\") pod \"redhat-marketplace-ccxr7\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.519232 master-0 kubenswrapper[7845]: I0223 13:05:57.517235 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 23 13:05:57.541330 master-0 kubenswrapper[7845]: I0223 13:05:57.541232 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-shjn2"] Feb 23 13:05:57.542392 master-0 kubenswrapper[7845]: I0223 13:05:57.542314 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.545155 master-0 kubenswrapper[7845]: I0223 13:05:57.545107 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-lp4jk" Feb 23 13:05:57.552419 master-0 kubenswrapper[7845]: I0223 13:05:57.552086 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shjn2"] Feb 23 13:05:57.590092 master-0 kubenswrapper[7845]: I0223 13:05:57.590034 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-utilities\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.590092 master-0 kubenswrapper[7845]: I0223 13:05:57.590074 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7fg\" (UniqueName: \"kubernetes.io/projected/1d40e8ca-222b-4e41-b1c9-86291193147a-kube-api-access-8c7fg\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.590092 master-0 kubenswrapper[7845]: I0223 13:05:57.590122 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-catalog-content\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.620184 master-0 kubenswrapper[7845]: I0223 13:05:57.620111 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfrhg"] Feb 23 13:05:57.624543 master-0 kubenswrapper[7845]: W0223 13:05:57.624499 7845 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0128982b_01b4_49cb_ab4a_8759b844c86b.slice/crio-3169cece10dce28604f06b8d9b8e0bfd22fff61c163e615108b41fa4a47fa62f WatchSource:0}: Error finding container 3169cece10dce28604f06b8d9b8e0bfd22fff61c163e615108b41fa4a47fa62f: Status 404 returned error can't find the container with id 3169cece10dce28604f06b8d9b8e0bfd22fff61c163e615108b41fa4a47fa62f Feb 23 13:05:57.673171 master-0 kubenswrapper[7845]: I0223 13:05:57.673102 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:05:57.691816 master-0 kubenswrapper[7845]: I0223 13:05:57.691422 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-utilities\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.691816 master-0 kubenswrapper[7845]: I0223 13:05:57.691474 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c7fg\" (UniqueName: \"kubernetes.io/projected/1d40e8ca-222b-4e41-b1c9-86291193147a-kube-api-access-8c7fg\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.692587 master-0 kubenswrapper[7845]: I0223 13:05:57.691631 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-catalog-content\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.692587 master-0 kubenswrapper[7845]: I0223 13:05:57.692013 7845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-utilities\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.692587 master-0 kubenswrapper[7845]: I0223 13:05:57.692325 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-catalog-content\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.699239 master-0 kubenswrapper[7845]: I0223 13:05:57.699155 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx"] Feb 23 13:05:57.716205 master-0 kubenswrapper[7845]: I0223 13:05:57.709812 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-dgldn_4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/kube-apiserver-operator/1.log" Feb 23 13:05:57.722016 master-0 kubenswrapper[7845]: I0223 13:05:57.721980 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c7fg\" (UniqueName: \"kubernetes.io/projected/1d40e8ca-222b-4e41-b1c9-86291193147a-kube-api-access-8c7fg\") pod \"redhat-operators-shjn2\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.777521 master-0 kubenswrapper[7845]: I0223 13:05:57.777480 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mldw4"] Feb 23 13:05:57.812296 master-0 kubenswrapper[7845]: I0223 13:05:57.812220 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz"] Feb 23 13:05:57.871673 master-0 kubenswrapper[7845]: 
I0223 13:05:57.871620 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:05:57.896329 master-0 kubenswrapper[7845]: I0223 13:05:57.896174 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/setup/0.log" Feb 23 13:05:58.061026 master-0 kubenswrapper[7845]: I0223 13:05:58.060608 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mldw4" event={"ID":"29908b4a-0df5-4c46-b886-c968976c25fb","Type":"ContainerStarted","Data":"c34c0686c926bdae121a0eedb681349d3da6cf0bf3d0236efb47c671f55f2bfa"} Feb 23 13:05:58.066941 master-0 kubenswrapper[7845]: I0223 13:05:58.062825 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" event={"ID":"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e","Type":"ContainerStarted","Data":"b608c73a48de5d50e74c55aca28591372e15d9f2c907a4169def9790022466af"} Feb 23 13:05:58.066941 master-0 kubenswrapper[7845]: I0223 13:05:58.065566 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"ce5fa293-4526-4dd9-a0e4-a1db7d667092","Type":"ContainerStarted","Data":"19aea6b0c64c2242c1162a5644f9c7d995fa9caa7710602094da7d8d77b66e03"} Feb 23 13:05:58.066941 master-0 kubenswrapper[7845]: I0223 13:05:58.065621 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"ce5fa293-4526-4dd9-a0e4-a1db7d667092","Type":"ContainerStarted","Data":"843d775bbad7c7fe41df23fb96ec59c3909440741cf205f5eb1b07a6fc2a50c5"} Feb 23 13:05:58.067614 master-0 kubenswrapper[7845]: I0223 13:05:58.067398 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" event={"ID":"54411ade-3383-48aa-ba10-62ffb40185b9","Type":"ContainerStarted","Data":"2e4fb291843e2a32fa702ce16cd7bd36b76c9baa1b908d899a1fea0027970ec2"} Feb 23 13:05:58.067614 master-0 kubenswrapper[7845]: I0223 13:05:58.067431 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" event={"ID":"54411ade-3383-48aa-ba10-62ffb40185b9","Type":"ContainerStarted","Data":"45f23e7a0d31d2c3d126aa0253e052ced5690e8352ab68bf6cd5ecb2feb526ad"} Feb 23 13:05:58.067614 master-0 kubenswrapper[7845]: I0223 13:05:58.067572 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:58.070044 master-0 kubenswrapper[7845]: I0223 13:05:58.069986 7845 generic.go:334] "Generic (PLEG): container finished" podID="0128982b-01b4-49cb-ab4a-8759b844c86b" containerID="724a8df1a9b3d2adc3e5862fae8386b6be43fcc540a79a07de74b8360f4c034d" exitCode=0 Feb 23 13:05:58.070099 master-0 kubenswrapper[7845]: I0223 13:05:58.070044 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrhg" event={"ID":"0128982b-01b4-49cb-ab4a-8759b844c86b","Type":"ContainerDied","Data":"724a8df1a9b3d2adc3e5862fae8386b6be43fcc540a79a07de74b8360f4c034d"} Feb 23 13:05:58.070099 master-0 kubenswrapper[7845]: I0223 13:05:58.070085 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrhg" event={"ID":"0128982b-01b4-49cb-ab4a-8759b844c86b","Type":"ContainerStarted","Data":"3169cece10dce28604f06b8d9b8e0bfd22fff61c163e615108b41fa4a47fa62f"} Feb 23 13:05:58.071011 master-0 kubenswrapper[7845]: I0223 13:05:58.070990 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" 
event={"ID":"8db940c1-82ba-4b6e-8137-059e26ab1ced","Type":"ContainerStarted","Data":"49a6b189f8fbf9c0aa7bb66aa47a22331a8f42d58ff77972bbb9f47a339fc2a5"} Feb 23 13:05:58.073717 master-0 kubenswrapper[7845]: I0223 13:05:58.073355 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" event={"ID":"430cb782-18d5-4429-99ef-29d3dca0d803","Type":"ContainerStarted","Data":"09c37fb183628456535e9d994f19979ed54eaad90335c36b799938ed6f869ef3"} Feb 23 13:05:58.073717 master-0 kubenswrapper[7845]: I0223 13:05:58.073385 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" event={"ID":"430cb782-18d5-4429-99ef-29d3dca0d803","Type":"ContainerStarted","Data":"db4d7e91b2342b6ae16dbd20882faf69f23aa624d101c5b6916aac5e61e38394"} Feb 23 13:05:58.073717 master-0 kubenswrapper[7845]: I0223 13:05:58.073399 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" event={"ID":"430cb782-18d5-4429-99ef-29d3dca0d803","Type":"ContainerStarted","Data":"c0138fc447fbdee86ffbe815a7ddaa8ef72faf5cdfc02ebf5b12e2363a575ee0"} Feb 23 13:05:58.090709 master-0 kubenswrapper[7845]: I0223 13:05:58.090643 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.090615926 podStartE2EDuration="2.090615926s" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:58.089327374 +0000 UTC m=+292.085058265" watchObservedRunningTime="2026-02-23 13:05:58.090615926 +0000 UTC m=+292.086346807" Feb 23 13:05:58.115968 master-0 kubenswrapper[7845]: I0223 13:05:58.115897 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" podStartSLOduration=2.115871367 podStartE2EDuration="2.115871367s" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:58.10417604 +0000 UTC m=+292.099906921" watchObservedRunningTime="2026-02-23 13:05:58.115871367 +0000 UTC m=+292.111602228" Feb 23 13:05:58.120040 master-0 kubenswrapper[7845]: I0223 13:05:58.120008 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver/0.log" Feb 23 13:05:58.140074 master-0 kubenswrapper[7845]: I0223 13:05:58.138865 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shjn2"] Feb 23 13:05:58.156506 master-0 kubenswrapper[7845]: W0223 13:05:58.155863 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d40e8ca_222b_4e41_b1c9_86291193147a.slice/crio-00a8cc9938769758481eeb507a8a511e4fea4ac8603da42445f1e6fa2500df33 WatchSource:0}: Error finding container 00a8cc9938769758481eeb507a8a511e4fea4ac8603da42445f1e6fa2500df33: Status 404 returned error can't find the container with id 00a8cc9938769758481eeb507a8a511e4fea4ac8603da42445f1e6fa2500df33 Feb 23 13:05:58.161181 master-0 kubenswrapper[7845]: I0223 13:05:58.160864 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" podStartSLOduration=2.160837288 podStartE2EDuration="2.160837288s" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:05:58.151386148 +0000 UTC m=+292.147117039" 
watchObservedRunningTime="2026-02-23 13:05:58.160837288 +0000 UTC m=+292.156568159" Feb 23 13:05:58.171324 master-0 kubenswrapper[7845]: I0223 13:05:58.171277 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccxr7"] Feb 23 13:05:58.179611 master-0 kubenswrapper[7845]: W0223 13:05:58.179555 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a87dfd2_d2d6_4359_96f8_bf01a5d7b9a4.slice/crio-ac778133e25eb465803a668164b009d4ef07614c0d72a48dbffcdcb57920e9f5 WatchSource:0}: Error finding container ac778133e25eb465803a668164b009d4ef07614c0d72a48dbffcdcb57920e9f5: Status 404 returned error can't find the container with id ac778133e25eb465803a668164b009d4ef07614c0d72a48dbffcdcb57920e9f5 Feb 23 13:05:58.296394 master-0 kubenswrapper[7845]: I0223 13:05:58.296279 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver-insecure-readyz/0.log" Feb 23 13:05:58.327285 master-0 kubenswrapper[7845]: I0223 13:05:58.326168 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:05:58.500722 master-0 kubenswrapper[7845]: I0223 13:05:58.500642 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_04a14e09-67c1-45e9-af34-bccb2fe3757e/installer/0.log" Feb 23 13:05:58.701132 master-0 kubenswrapper[7845]: I0223 13:05:58.701083 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2d8a9026-ee0a-44c4-9c90-cd863f5461dd/installer/0.log" Feb 23 13:05:58.901901 master-0 kubenswrapper[7845]: I0223 13:05:58.901852 7845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-jpf5n_b1970ec8-620e-4529-bf3b-1cf9a52c27d3/kube-controller-manager-operator/0.log" Feb 23 13:05:59.083789 master-0 kubenswrapper[7845]: I0223 13:05:59.083414 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mldw4" event={"ID":"29908b4a-0df5-4c46-b886-c968976c25fb","Type":"ContainerDied","Data":"95e946dd400ab3361a407271ad87765a76061201b898907bfd81d61a000c3f70"} Feb 23 13:05:59.084431 master-0 kubenswrapper[7845]: I0223 13:05:59.084382 7845 generic.go:334] "Generic (PLEG): container finished" podID="29908b4a-0df5-4c46-b886-c968976c25fb" containerID="95e946dd400ab3361a407271ad87765a76061201b898907bfd81d61a000c3f70" exitCode=0 Feb 23 13:05:59.088666 master-0 kubenswrapper[7845]: I0223 13:05:59.088639 7845 generic.go:334] "Generic (PLEG): container finished" podID="1d40e8ca-222b-4e41-b1c9-86291193147a" containerID="d327710529f59d8c9da3bd6a73015ea11137381731e99ad4d928fa1511eb2b90" exitCode=0 Feb 23 13:05:59.088760 master-0 kubenswrapper[7845]: I0223 13:05:59.088713 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shjn2" event={"ID":"1d40e8ca-222b-4e41-b1c9-86291193147a","Type":"ContainerDied","Data":"d327710529f59d8c9da3bd6a73015ea11137381731e99ad4d928fa1511eb2b90"} Feb 23 13:05:59.088760 master-0 kubenswrapper[7845]: I0223 13:05:59.088745 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shjn2" event={"ID":"1d40e8ca-222b-4e41-b1c9-86291193147a","Type":"ContainerStarted","Data":"00a8cc9938769758481eeb507a8a511e4fea4ac8603da42445f1e6fa2500df33"} Feb 23 13:05:59.091169 master-0 kubenswrapper[7845]: I0223 13:05:59.090708 7845 generic.go:334] "Generic (PLEG): container finished" podID="4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" containerID="568a3a11e000578b5ac04304482dc130dccde359b178556f465a305ccc23db65" exitCode=0 Feb 23 
13:05:59.091169 master-0 kubenswrapper[7845]: I0223 13:05:59.090762 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccxr7" event={"ID":"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4","Type":"ContainerDied","Data":"568a3a11e000578b5ac04304482dc130dccde359b178556f465a305ccc23db65"} Feb 23 13:05:59.091169 master-0 kubenswrapper[7845]: I0223 13:05:59.090792 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccxr7" event={"ID":"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4","Type":"ContainerStarted","Data":"ac778133e25eb465803a668164b009d4ef07614c0d72a48dbffcdcb57920e9f5"} Feb 23 13:05:59.093490 master-0 kubenswrapper[7845]: I0223 13:05:59.092988 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" event={"ID":"8db940c1-82ba-4b6e-8137-059e26ab1ced","Type":"ContainerStarted","Data":"48c2fd5c38d6372b04c89f1680fdf133e5d619a556733af074b36cac350744bf"} Feb 23 13:05:59.101635 master-0 kubenswrapper[7845]: I0223 13:05:59.101590 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-jpf5n_b1970ec8-620e-4529-bf3b-1cf9a52c27d3/kube-controller-manager-operator/1.log" Feb 23 13:05:59.300509 master-0 kubenswrapper[7845]: I0223 13:05:59.300414 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/2.log" Feb 23 13:05:59.707339 master-0 kubenswrapper[7845]: I0223 13:05:59.707258 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/3.log" Feb 23 13:05:59.908178 master-0 kubenswrapper[7845]: I0223 13:05:59.908108 7845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/0.log" Feb 23 13:06:00.108038 master-0 kubenswrapper[7845]: I0223 13:06:00.107883 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/0.log" Feb 23 13:06:00.303833 master-0 kubenswrapper[7845]: I0223 13:06:00.303391 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/1.log" Feb 23 13:06:00.507691 master-0 kubenswrapper[7845]: I0223 13:06:00.507618 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_1860bead-61b8-4678-b583-c13c79575ef4/installer/0.log" Feb 23 13:06:00.710192 master-0 kubenswrapper[7845]: I0223 13:06:00.710134 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-j97h8_0a80d5ac-27ce-4ba9-809e-28c86b80163b/kube-scheduler-operator-container/0.log" Feb 23 13:06:00.899976 master-0 kubenswrapper[7845]: I0223 13:06:00.899912 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-j97h8_0a80d5ac-27ce-4ba9-809e-28c86b80163b/kube-scheduler-operator-container/1.log" Feb 23 13:06:01.025995 master-0 kubenswrapper[7845]: I0223 13:06:01.025941 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 23 13:06:01.026735 master-0 kubenswrapper[7845]: I0223 13:06:01.026704 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.029090 master-0 kubenswrapper[7845]: I0223 13:06:01.029053 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:06:01.029144 master-0 kubenswrapper[7845]: I0223 13:06:01.029108 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-q2chk" Feb 23 13:06:01.043705 master-0 kubenswrapper[7845]: I0223 13:06:01.043652 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 23 13:06:01.060021 master-0 kubenswrapper[7845]: I0223 13:06:01.059960 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.060021 master-0 kubenswrapper[7845]: I0223 13:06:01.060023 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.060276 master-0 kubenswrapper[7845]: I0223 13:06:01.060085 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.102568 master-0 
kubenswrapper[7845]: I0223 13:06:01.102524 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-6wk86_ae1799b6-85b0-4aed-8835-35cb3d8d1109/openshift-apiserver-operator/0.log" Feb 23 13:06:01.114640 master-0 kubenswrapper[7845]: I0223 13:06:01.114590 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" event={"ID":"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e","Type":"ContainerStarted","Data":"bd090a6220a0b4f2f0fc9cb08565f482d9db23101596d845f689553fc6d7e220"} Feb 23 13:06:01.114640 master-0 kubenswrapper[7845]: I0223 13:06:01.114643 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" event={"ID":"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e","Type":"ContainerStarted","Data":"ea4ccade5d91f42ec6781f90655ef7759993042065379bed5ddb1cdfcc75c01c"} Feb 23 13:06:01.114640 master-0 kubenswrapper[7845]: I0223 13:06:01.114658 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" event={"ID":"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e","Type":"ContainerStarted","Data":"5f4cf3f364a6f327cfc81ba1663c5035ffdd90e6862495a257d3ee6e70e53f99"} Feb 23 13:06:01.161515 master-0 kubenswrapper[7845]: I0223 13:06:01.161452 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.161781 master-0 kubenswrapper[7845]: I0223 13:06:01.161555 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.161832 master-0 kubenswrapper[7845]: I0223 13:06:01.161766 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.162380 master-0 kubenswrapper[7845]: I0223 13:06:01.161902 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.162380 master-0 kubenswrapper[7845]: I0223 13:06:01.162302 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.179001 master-0 kubenswrapper[7845]: I0223 13:06:01.178935 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.298814 master-0 kubenswrapper[7845]: I0223 13:06:01.298471 7845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-6wk86_ae1799b6-85b0-4aed-8835-35cb3d8d1109/openshift-apiserver-operator/1.log" Feb 23 13:06:01.309572 master-0 kubenswrapper[7845]: I0223 13:06:01.309370 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" podStartSLOduration=2.19403845 podStartE2EDuration="5.309345364s" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="2026-02-23 13:05:57.130849594 +0000 UTC m=+291.126580465" lastFinishedPulling="2026-02-23 13:06:00.246156508 +0000 UTC m=+294.241887379" observedRunningTime="2026-02-23 13:06:01.131744338 +0000 UTC m=+295.127475209" watchObservedRunningTime="2026-02-23 13:06:01.309345364 +0000 UTC m=+295.305076235" Feb 23 13:06:01.311766 master-0 kubenswrapper[7845]: I0223 13:06:01.311734 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccxr7"] Feb 23 13:06:01.351992 master-0 kubenswrapper[7845]: I0223 13:06:01.351931 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:01.619498 master-0 kubenswrapper[7845]: I0223 13:06:01.619363 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6dcf85cb46-cmf75_c159d5f4-5c95-4600-80ec-a17a419cfd7a/fix-audit-permissions/0.log" Feb 23 13:06:01.703336 master-0 kubenswrapper[7845]: I0223 13:06:01.700617 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6dcf85cb46-cmf75_c159d5f4-5c95-4600-80ec-a17a419cfd7a/openshift-apiserver/0.log" Feb 23 13:06:01.726106 master-0 kubenswrapper[7845]: I0223 13:06:01.726034 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r8xxs"] Feb 23 13:06:01.727349 master-0 kubenswrapper[7845]: I0223 13:06:01.727294 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.738619 master-0 kubenswrapper[7845]: I0223 13:06:01.738390 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 23 13:06:01.743283 master-0 kubenswrapper[7845]: I0223 13:06:01.743211 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8xxs"] Feb 23 13:06:01.745823 master-0 kubenswrapper[7845]: W0223 13:06:01.745789 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc2e50127_3c2e_4514_ace5_2cf6f9223abf.slice/crio-835102869e1f66afd25840f4e26fbf1c829644e975ef14b09eb97d3f81d79a06 WatchSource:0}: Error finding container 835102869e1f66afd25840f4e26fbf1c829644e975ef14b09eb97d3f81d79a06: Status 404 returned error can't find the container with id 835102869e1f66afd25840f4e26fbf1c829644e975ef14b09eb97d3f81d79a06 Feb 23 13:06:01.771294 master-0 kubenswrapper[7845]: I0223 13:06:01.771209 7845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65tqd\" (UniqueName: \"kubernetes.io/projected/9c3f9dc5-d10d-452c-bf5d-c5830a444617-kube-api-access-65tqd\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.771462 master-0 kubenswrapper[7845]: I0223 13:06:01.771391 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-utilities\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.771462 master-0 kubenswrapper[7845]: I0223 13:06:01.771441 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-catalog-content\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.872728 master-0 kubenswrapper[7845]: I0223 13:06:01.872603 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-utilities\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.872728 master-0 kubenswrapper[7845]: I0223 13:06:01.872665 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-catalog-content\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.872950 
master-0 kubenswrapper[7845]: I0223 13:06:01.872877 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65tqd\" (UniqueName: \"kubernetes.io/projected/9c3f9dc5-d10d-452c-bf5d-c5830a444617-kube-api-access-65tqd\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.873626 master-0 kubenswrapper[7845]: I0223 13:06:01.873592 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-utilities\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.873680 master-0 kubenswrapper[7845]: I0223 13:06:01.873652 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-catalog-content\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.889497 master-0 kubenswrapper[7845]: I0223 13:06:01.889461 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65tqd\" (UniqueName: \"kubernetes.io/projected/9c3f9dc5-d10d-452c-bf5d-c5830a444617-kube-api-access-65tqd\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:01.897995 master-0 kubenswrapper[7845]: I0223 13:06:01.897933 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6dcf85cb46-cmf75_c159d5f4-5c95-4600-80ec-a17a419cfd7a/openshift-apiserver-check-endpoints/0.log" Feb 23 13:06:02.101099 master-0 kubenswrapper[7845]: I0223 13:06:02.100776 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:02.102167 master-0 kubenswrapper[7845]: I0223 13:06:02.101830 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-drk2j_03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/etcd-operator/0.log" Feb 23 13:06:02.123476 master-0 kubenswrapper[7845]: I0223 13:06:02.123358 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"c2e50127-3c2e-4514-ace5-2cf6f9223abf","Type":"ContainerStarted","Data":"87320ceaa2976029b0853261379f23dc5fc274ad76d399f47415010358a9fd41"} Feb 23 13:06:02.123476 master-0 kubenswrapper[7845]: I0223 13:06:02.123407 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"c2e50127-3c2e-4514-ace5-2cf6f9223abf","Type":"ContainerStarted","Data":"835102869e1f66afd25840f4e26fbf1c829644e975ef14b09eb97d3f81d79a06"} Feb 23 13:06:02.143536 master-0 kubenswrapper[7845]: I0223 13:06:02.143458 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=1.143441648 podStartE2EDuration="1.143441648s" podCreationTimestamp="2026-02-23 13:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:06:02.142547195 +0000 UTC m=+296.138278096" watchObservedRunningTime="2026-02-23 13:06:02.143441648 +0000 UTC m=+296.139172519" Feb 23 13:06:02.302374 master-0 kubenswrapper[7845]: I0223 13:06:02.302201 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-drk2j_03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/etcd-operator/1.log" Feb 23 13:06:02.320878 master-0 kubenswrapper[7845]: I0223 13:06:02.320776 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-shjn2"] Feb 23 13:06:02.488674 master-0 kubenswrapper[7845]: I0223 13:06:02.488621 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8xxs"] Feb 23 13:06:02.495420 master-0 kubenswrapper[7845]: W0223 13:06:02.495308 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c3f9dc5_d10d_452c_bf5d_c5830a444617.slice/crio-bed3da5536171867bf64480ad5077cc20f7948c0a8fbe4ad2cdb5e228228b281 WatchSource:0}: Error finding container bed3da5536171867bf64480ad5077cc20f7948c0a8fbe4ad2cdb5e228228b281: Status 404 returned error can't find the container with id bed3da5536171867bf64480ad5077cc20f7948c0a8fbe4ad2cdb5e228228b281 Feb 23 13:06:02.501372 master-0 kubenswrapper[7845]: I0223 13:06:02.501330 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-mjhwm_d91fa6bb-0c88-4930-884a-67e840d58a9f/catalog-operator/0.log" Feb 23 13:06:02.986641 master-0 kubenswrapper[7845]: I0223 13:06:02.985386 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5499d7f7bb-g9x74_cbcca259-0dbf-48ca-bf90-eec638dcdd10/olm-operator/0.log" Feb 23 13:06:02.986641 master-0 kubenswrapper[7845]: I0223 13:06:02.986281 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bxqsd"] Feb 23 13:06:02.998123 master-0 kubenswrapper[7845]: I0223 13:06:02.998049 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.037170 master-0 kubenswrapper[7845]: I0223 13:06:03.037106 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxqsd"] Feb 23 13:06:03.093539 master-0 kubenswrapper[7845]: I0223 13:06:03.093453 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-utilities\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.093539 master-0 kubenswrapper[7845]: I0223 13:06:03.093529 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj957\" (UniqueName: \"kubernetes.io/projected/b48d5b87-189b-45b6-ba55-37bd22d59eb6-kube-api-access-nj957\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.093771 master-0 kubenswrapper[7845]: I0223 13:06:03.093640 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-catalog-content\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.096641 master-0 kubenswrapper[7845]: I0223 13:06:03.096477 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tzms_da5d5997-e45f-4858-a9a9-e880bc222caf/kube-rbac-proxy/0.log" Feb 23 13:06:03.131808 master-0 kubenswrapper[7845]: I0223 13:06:03.131734 7845 generic.go:334] "Generic (PLEG): container finished" podID="9c3f9dc5-d10d-452c-bf5d-c5830a444617" 
containerID="575e8eb2d638c0aaa08f496c1356ae98d7c6f7469dbf105d6341ad7a0b64e752" exitCode=0 Feb 23 13:06:03.132577 master-0 kubenswrapper[7845]: I0223 13:06:03.132440 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8xxs" event={"ID":"9c3f9dc5-d10d-452c-bf5d-c5830a444617","Type":"ContainerDied","Data":"575e8eb2d638c0aaa08f496c1356ae98d7c6f7469dbf105d6341ad7a0b64e752"} Feb 23 13:06:03.132577 master-0 kubenswrapper[7845]: I0223 13:06:03.132465 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8xxs" event={"ID":"9c3f9dc5-d10d-452c-bf5d-c5830a444617","Type":"ContainerStarted","Data":"bed3da5536171867bf64480ad5077cc20f7948c0a8fbe4ad2cdb5e228228b281"} Feb 23 13:06:03.195693 master-0 kubenswrapper[7845]: I0223 13:06:03.195638 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-catalog-content\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.195889 master-0 kubenswrapper[7845]: I0223 13:06:03.195732 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-utilities\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.195889 master-0 kubenswrapper[7845]: I0223 13:06:03.195764 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj957\" (UniqueName: \"kubernetes.io/projected/b48d5b87-189b-45b6-ba55-37bd22d59eb6-kube-api-access-nj957\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.196183 master-0 
kubenswrapper[7845]: I0223 13:06:03.196162 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-catalog-content\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.196512 master-0 kubenswrapper[7845]: I0223 13:06:03.196495 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-utilities\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.217091 master-0 kubenswrapper[7845]: I0223 13:06:03.216988 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj957\" (UniqueName: \"kubernetes.io/projected/b48d5b87-189b-45b6-ba55-37bd22d59eb6-kube-api-access-nj957\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:03.305970 master-0 kubenswrapper[7845]: I0223 13:06:03.305891 7845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tzms_da5d5997-e45f-4858-a9a9-e880bc222caf/package-server-manager/0.log" Feb 23 13:06:03.393217 master-0 kubenswrapper[7845]: I0223 13:06:03.387461 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:04.066826 master-0 kubenswrapper[7845]: I0223 13:06:04.066602 7845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxqsd"] Feb 23 13:06:04.075838 master-0 kubenswrapper[7845]: W0223 13:06:04.075161 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb48d5b87_189b_45b6_ba55_37bd22d59eb6.slice/crio-6098dfd89bcd8aca6a463063a3944c75855225a89ecc7de08ce7be93098f2f35 WatchSource:0}: Error finding container 6098dfd89bcd8aca6a463063a3944c75855225a89ecc7de08ce7be93098f2f35: Status 404 returned error can't find the container with id 6098dfd89bcd8aca6a463063a3944c75855225a89ecc7de08ce7be93098f2f35 Feb 23 13:06:04.156323 master-0 kubenswrapper[7845]: I0223 13:06:04.156115 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxqsd" event={"ID":"b48d5b87-189b-45b6-ba55-37bd22d59eb6","Type":"ContainerStarted","Data":"6098dfd89bcd8aca6a463063a3944c75855225a89ecc7de08ce7be93098f2f35"} Feb 23 13:06:05.164895 master-0 kubenswrapper[7845]: I0223 13:06:05.164592 7845 generic.go:334] "Generic (PLEG): container finished" podID="b48d5b87-189b-45b6-ba55-37bd22d59eb6" containerID="d2baf7def32d6ff8e0d60946c5533f6a35fc42b4bd00e227486661e9d86637b2" exitCode=0 Feb 23 13:06:05.164895 master-0 kubenswrapper[7845]: I0223 13:06:05.164656 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxqsd" event={"ID":"b48d5b87-189b-45b6-ba55-37bd22d59eb6","Type":"ContainerDied","Data":"d2baf7def32d6ff8e0d60946c5533f6a35fc42b4bd00e227486661e9d86637b2"} Feb 23 13:06:06.355630 master-0 kubenswrapper[7845]: I0223 13:06:06.355555 7845 scope.go:117] "RemoveContainer" containerID="db83ef82ac155acc22a9f418d8c50d6b04cf844595b5d8cd37f345df9398fd8f" Feb 23 13:06:15.482339 master-0 kubenswrapper[7845]: I0223 
13:06:15.478233 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf"] Feb 23 13:06:15.482339 master-0 kubenswrapper[7845]: I0223 13:06:15.478709 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="cluster-cloud-controller-manager" containerID="cri-o://5f4cf3f364a6f327cfc81ba1663c5035ffdd90e6862495a257d3ee6e70e53f99" gracePeriod=30 Feb 23 13:06:15.482339 master-0 kubenswrapper[7845]: I0223 13:06:15.478748 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="kube-rbac-proxy" containerID="cri-o://bd090a6220a0b4f2f0fc9cb08565f482d9db23101596d845f689553fc6d7e220" gracePeriod=30 Feb 23 13:06:15.482339 master-0 kubenswrapper[7845]: I0223 13:06:15.478926 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="config-sync-controllers" containerID="cri-o://ea4ccade5d91f42ec6781f90655ef7759993042065379bed5ddb1cdfcc75c01c" gracePeriod=30 Feb 23 13:06:16.235361 master-0 kubenswrapper[7845]: I0223 13:06:16.235295 7845 generic.go:334] "Generic (PLEG): container finished" podID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerID="bd090a6220a0b4f2f0fc9cb08565f482d9db23101596d845f689553fc6d7e220" exitCode=0 Feb 23 13:06:16.235361 master-0 kubenswrapper[7845]: I0223 13:06:16.235336 7845 generic.go:334] "Generic (PLEG): container finished" podID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" 
containerID="ea4ccade5d91f42ec6781f90655ef7759993042065379bed5ddb1cdfcc75c01c" exitCode=0 Feb 23 13:06:16.235361 master-0 kubenswrapper[7845]: I0223 13:06:16.235346 7845 generic.go:334] "Generic (PLEG): container finished" podID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerID="5f4cf3f364a6f327cfc81ba1663c5035ffdd90e6862495a257d3ee6e70e53f99" exitCode=0 Feb 23 13:06:16.235361 master-0 kubenswrapper[7845]: I0223 13:06:16.235375 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" event={"ID":"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e","Type":"ContainerDied","Data":"bd090a6220a0b4f2f0fc9cb08565f482d9db23101596d845f689553fc6d7e220"} Feb 23 13:06:16.235816 master-0 kubenswrapper[7845]: I0223 13:06:16.235407 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" event={"ID":"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e","Type":"ContainerDied","Data":"ea4ccade5d91f42ec6781f90655ef7759993042065379bed5ddb1cdfcc75c01c"} Feb 23 13:06:16.235816 master-0 kubenswrapper[7845]: I0223 13:06:16.235421 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" event={"ID":"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e","Type":"ContainerDied","Data":"5f4cf3f364a6f327cfc81ba1663c5035ffdd90e6862495a257d3ee6e70e53f99"} Feb 23 13:06:24.214696 master-0 kubenswrapper[7845]: I0223 13:06:24.214657 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:06:24.287535 master-0 kubenswrapper[7845]: I0223 13:06:24.287491 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" event={"ID":"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e","Type":"ContainerDied","Data":"b608c73a48de5d50e74c55aca28591372e15d9f2c907a4169def9790022466af"} Feb 23 13:06:24.287657 master-0 kubenswrapper[7845]: I0223 13:06:24.287552 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf" Feb 23 13:06:24.287657 master-0 kubenswrapper[7845]: I0223 13:06:24.287571 7845 scope.go:117] "RemoveContainer" containerID="bd090a6220a0b4f2f0fc9cb08565f482d9db23101596d845f689553fc6d7e220" Feb 23 13:06:24.315160 master-0 kubenswrapper[7845]: I0223 13:06:24.315112 7845 scope.go:117] "RemoveContainer" containerID="ea4ccade5d91f42ec6781f90655ef7759993042065379bed5ddb1cdfcc75c01c" Feb 23 13:06:24.323046 master-0 kubenswrapper[7845]: I0223 13:06:24.323013 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lptv\" (UniqueName: \"kubernetes.io/projected/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-kube-api-access-7lptv\") pod \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " Feb 23 13:06:24.323248 master-0 kubenswrapper[7845]: I0223 13:06:24.323075 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-images\") pod \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " Feb 23 13:06:24.323248 master-0 kubenswrapper[7845]: I0223 13:06:24.323114 7845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-cloud-controller-manager-operator-tls\") pod \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " Feb 23 13:06:24.323248 master-0 kubenswrapper[7845]: I0223 13:06:24.323137 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-host-etc-kube\") pod \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " Feb 23 13:06:24.323248 master-0 kubenswrapper[7845]: I0223 13:06:24.323168 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-auth-proxy-config\") pod \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\" (UID: \"fac71a3d-cfbb-49d2-9a5c-c3ed714a933e\") " Feb 23 13:06:24.323772 master-0 kubenswrapper[7845]: I0223 13:06:24.323744 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" (UID: "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:06:24.324291 master-0 kubenswrapper[7845]: I0223 13:06:24.324254 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" (UID: "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e"). InnerVolumeSpecName "host-etc-kube". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:06:24.324869 master-0 kubenswrapper[7845]: I0223 13:06:24.324822 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-images" (OuterVolumeSpecName: "images") pod "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" (UID: "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:06:24.326591 master-0 kubenswrapper[7845]: I0223 13:06:24.326565 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-kube-api-access-7lptv" (OuterVolumeSpecName: "kube-api-access-7lptv") pod "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" (UID: "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e"). InnerVolumeSpecName "kube-api-access-7lptv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:06:24.327150 master-0 kubenswrapper[7845]: I0223 13:06:24.327115 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" (UID: "fac71a3d-cfbb-49d2-9a5c-c3ed714a933e"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:06:24.366536 master-0 kubenswrapper[7845]: I0223 13:06:24.366498 7845 scope.go:117] "RemoveContainer" containerID="5f4cf3f364a6f327cfc81ba1663c5035ffdd90e6862495a257d3ee6e70e53f99" Feb 23 13:06:24.424656 master-0 kubenswrapper[7845]: I0223 13:06:24.424607 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lptv\" (UniqueName: \"kubernetes.io/projected/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-kube-api-access-7lptv\") on node \"master-0\" DevicePath \"\"" Feb 23 13:06:24.424820 master-0 kubenswrapper[7845]: I0223 13:06:24.424645 7845 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-images\") on node \"master-0\" DevicePath \"\"" Feb 23 13:06:24.424820 master-0 kubenswrapper[7845]: I0223 13:06:24.424685 7845 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:06:24.424820 master-0 kubenswrapper[7845]: I0223 13:06:24.424699 7845 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Feb 23 13:06:24.424820 master-0 kubenswrapper[7845]: I0223 13:06:24.424715 7845 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:06:24.919346 master-0 kubenswrapper[7845]: I0223 13:06:24.919274 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf"] Feb 23 13:06:24.932290 master-0 
kubenswrapper[7845]: I0223 13:06:24.932192 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-t6gmf"] Feb 23 13:06:24.976969 master-0 kubenswrapper[7845]: I0223 13:06:24.976902 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f"] Feb 23 13:06:24.977271 master-0 kubenswrapper[7845]: E0223 13:06:24.977141 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="config-sync-controllers" Feb 23 13:06:24.977271 master-0 kubenswrapper[7845]: I0223 13:06:24.977155 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="config-sync-controllers" Feb 23 13:06:24.977271 master-0 kubenswrapper[7845]: E0223 13:06:24.977164 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="kube-rbac-proxy" Feb 23 13:06:24.977271 master-0 kubenswrapper[7845]: I0223 13:06:24.977170 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="kube-rbac-proxy" Feb 23 13:06:24.977271 master-0 kubenswrapper[7845]: E0223 13:06:24.977185 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="cluster-cloud-controller-manager" Feb 23 13:06:24.977271 master-0 kubenswrapper[7845]: I0223 13:06:24.977193 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="cluster-cloud-controller-manager" Feb 23 13:06:24.977545 master-0 kubenswrapper[7845]: I0223 13:06:24.977320 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="config-sync-controllers" Feb 23 
13:06:24.977545 master-0 kubenswrapper[7845]: I0223 13:06:24.977332 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="cluster-cloud-controller-manager" Feb 23 13:06:24.977545 master-0 kubenswrapper[7845]: I0223 13:06:24.977342 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" containerName="kube-rbac-proxy" Feb 23 13:06:24.978111 master-0 kubenswrapper[7845]: I0223 13:06:24.978082 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:24.980116 master-0 kubenswrapper[7845]: I0223 13:06:24.980077 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4dmq5" Feb 23 13:06:24.980272 master-0 kubenswrapper[7845]: I0223 13:06:24.980077 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:06:24.980683 master-0 kubenswrapper[7845]: I0223 13:06:24.980654 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 23 13:06:24.981220 master-0 kubenswrapper[7845]: I0223 13:06:24.981194 7845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 23 13:06:24.981432 master-0 kubenswrapper[7845]: I0223 13:06:24.981402 7845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 23 13:06:24.981833 master-0 kubenswrapper[7845]: I0223 13:06:24.981807 7845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:06:25.035778 master-0 kubenswrapper[7845]: I0223 13:06:25.035718 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.036018 master-0 kubenswrapper[7845]: I0223 13:06:25.035794 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.036018 master-0 kubenswrapper[7845]: I0223 13:06:25.035838 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.036018 master-0 kubenswrapper[7845]: I0223 13:06:25.035955 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgsw\" (UniqueName: \"kubernetes.io/projected/0d7283ee-8959-44b6-83fb-b152510485eb-kube-api-access-hpgsw\") pod 
\"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.036148 master-0 kubenswrapper[7845]: I0223 13:06:25.036040 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7283ee-8959-44b6-83fb-b152510485eb-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.137205 master-0 kubenswrapper[7845]: I0223 13:06:25.137046 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.137205 master-0 kubenswrapper[7845]: I0223 13:06:25.137129 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.138151 master-0 kubenswrapper[7845]: I0223 13:06:25.137431 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.138226 master-0 kubenswrapper[7845]: I0223 13:06:25.138204 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpgsw\" (UniqueName: \"kubernetes.io/projected/0d7283ee-8959-44b6-83fb-b152510485eb-kube-api-access-hpgsw\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.138342 master-0 kubenswrapper[7845]: I0223 13:06:25.138322 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7283ee-8959-44b6-83fb-b152510485eb-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.138520 master-0 kubenswrapper[7845]: I0223 13:06:25.138493 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7283ee-8959-44b6-83fb-b152510485eb-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.138555 master-0 kubenswrapper[7845]: I0223 13:06:25.138095 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.138788 master-0 kubenswrapper[7845]: I0223 13:06:25.138113 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.139776 master-0 kubenswrapper[7845]: I0223 13:06:25.139741 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.153569 master-0 kubenswrapper[7845]: I0223 13:06:25.153530 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpgsw\" (UniqueName: \"kubernetes.io/projected/0d7283ee-8959-44b6-83fb-b152510485eb-kube-api-access-hpgsw\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.294514 master-0 kubenswrapper[7845]: I0223 13:06:25.294427 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:25.302920 master-0 kubenswrapper[7845]: I0223 13:06:25.302853 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" event={"ID":"8db940c1-82ba-4b6e-8137-059e26ab1ced","Type":"ContainerStarted","Data":"c10ab2ee9ebfa349f56fe76937a41bcc4073bbb1da67ba666a8653aa33c15175"} Feb 23 13:06:25.304842 master-0 kubenswrapper[7845]: I0223 13:06:25.304774 7845 generic.go:334] "Generic (PLEG): container finished" podID="9c3f9dc5-d10d-452c-bf5d-c5830a444617" containerID="d0de1e6343e6391d3758c50779d73db6f7290912532fe3316a0336e90448c6db" exitCode=0 Feb 23 13:06:25.304934 master-0 kubenswrapper[7845]: I0223 13:06:25.304859 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8xxs" event={"ID":"9c3f9dc5-d10d-452c-bf5d-c5830a444617","Type":"ContainerDied","Data":"d0de1e6343e6391d3758c50779d73db6f7290912532fe3316a0336e90448c6db"} Feb 23 13:06:25.314461 master-0 kubenswrapper[7845]: I0223 13:06:25.308907 7845 generic.go:334] "Generic (PLEG): container finished" podID="29908b4a-0df5-4c46-b886-c968976c25fb" containerID="f1f8754c5384bd933de1355ed0d4210b1fe7bc06bbbe4e8dc3bb20c9c6ae8499" exitCode=0 Feb 23 13:06:25.314461 master-0 kubenswrapper[7845]: I0223 13:06:25.309016 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mldw4" event={"ID":"29908b4a-0df5-4c46-b886-c968976c25fb","Type":"ContainerDied","Data":"f1f8754c5384bd933de1355ed0d4210b1fe7bc06bbbe4e8dc3bb20c9c6ae8499"} Feb 23 13:06:25.314461 master-0 kubenswrapper[7845]: I0223 13:06:25.310943 7845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:06:25.315353 master-0 kubenswrapper[7845]: I0223 13:06:25.315227 7845 generic.go:334] "Generic (PLEG): container finished" 
podID="1d40e8ca-222b-4e41-b1c9-86291193147a" containerID="6b9a9413c5c8e23acbf2ca9c481f0a8343082b1a43fb4299b0e86fce5894a54d" exitCode=0 Feb 23 13:06:25.315785 master-0 kubenswrapper[7845]: I0223 13:06:25.315494 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shjn2" event={"ID":"1d40e8ca-222b-4e41-b1c9-86291193147a","Type":"ContainerDied","Data":"6b9a9413c5c8e23acbf2ca9c481f0a8343082b1a43fb4299b0e86fce5894a54d"} Feb 23 13:06:25.322864 master-0 kubenswrapper[7845]: I0223 13:06:25.322807 7845 generic.go:334] "Generic (PLEG): container finished" podID="b48d5b87-189b-45b6-ba55-37bd22d59eb6" containerID="0cd30e8676779569aa21305583cf916e9593358a307866f2fe5ad8cf68542eb9" exitCode=0 Feb 23 13:06:25.322969 master-0 kubenswrapper[7845]: I0223 13:06:25.322904 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxqsd" event={"ID":"b48d5b87-189b-45b6-ba55-37bd22d59eb6","Type":"ContainerDied","Data":"0cd30e8676779569aa21305583cf916e9593358a307866f2fe5ad8cf68542eb9"} Feb 23 13:06:25.326733 master-0 kubenswrapper[7845]: I0223 13:06:25.326652 7845 generic.go:334] "Generic (PLEG): container finished" podID="0128982b-01b4-49cb-ab4a-8759b844c86b" containerID="13f118397154c0722bc4d67c0e8029845516c7227b9d9347ffbb69f6316914e4" exitCode=0 Feb 23 13:06:25.326797 master-0 kubenswrapper[7845]: I0223 13:06:25.326736 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrhg" event={"ID":"0128982b-01b4-49cb-ab4a-8759b844c86b","Type":"ContainerDied","Data":"13f118397154c0722bc4d67c0e8029845516c7227b9d9347ffbb69f6316914e4"} Feb 23 13:06:25.330907 master-0 kubenswrapper[7845]: I0223 13:06:25.330855 7845 generic.go:334] "Generic (PLEG): container finished" podID="4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" containerID="652d473cdab52f393fe4242041de885641a475619de89058cd263fc1d5b3ca35" exitCode=0 Feb 23 13:06:25.330907 master-0 kubenswrapper[7845]: I0223 
13:06:25.330908 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccxr7" event={"ID":"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4","Type":"ContainerDied","Data":"652d473cdab52f393fe4242041de885641a475619de89058cd263fc1d5b3ca35"} Feb 23 13:06:25.390315 master-0 kubenswrapper[7845]: I0223 13:06:25.388850 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" podStartSLOduration=3.390793036 podStartE2EDuration="29.388826249s" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="2026-02-23 13:05:58.093857229 +0000 UTC m=+292.089588090" lastFinishedPulling="2026-02-23 13:06:24.091890432 +0000 UTC m=+318.087621303" observedRunningTime="2026-02-23 13:06:25.357347525 +0000 UTC m=+319.353078396" watchObservedRunningTime="2026-02-23 13:06:25.388826249 +0000 UTC m=+319.384557140" Feb 23 13:06:25.667376 master-0 kubenswrapper[7845]: I0223 13:06:25.667331 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shjn2" Feb 23 13:06:25.687836 master-0 kubenswrapper[7845]: I0223 13:06:25.687787 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccxr7" Feb 23 13:06:25.860368 master-0 kubenswrapper[7845]: I0223 13:06:25.860298 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c7fg\" (UniqueName: \"kubernetes.io/projected/1d40e8ca-222b-4e41-b1c9-86291193147a-kube-api-access-8c7fg\") pod \"1d40e8ca-222b-4e41-b1c9-86291193147a\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " Feb 23 13:06:25.860448 master-0 kubenswrapper[7845]: I0223 13:06:25.860415 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-catalog-content\") pod \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " Feb 23 13:06:25.860496 master-0 kubenswrapper[7845]: I0223 13:06:25.860467 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-utilities\") pod \"1d40e8ca-222b-4e41-b1c9-86291193147a\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " Feb 23 13:06:25.860531 master-0 kubenswrapper[7845]: I0223 13:06:25.860510 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jhh8\" (UniqueName: \"kubernetes.io/projected/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-kube-api-access-7jhh8\") pod \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " Feb 23 13:06:25.860564 master-0 kubenswrapper[7845]: I0223 13:06:25.860550 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-utilities\") pod \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\" (UID: \"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4\") " Feb 23 13:06:25.860597 master-0 kubenswrapper[7845]: I0223 13:06:25.860584 7845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-catalog-content\") pod \"1d40e8ca-222b-4e41-b1c9-86291193147a\" (UID: \"1d40e8ca-222b-4e41-b1c9-86291193147a\") " Feb 23 13:06:25.862213 master-0 kubenswrapper[7845]: I0223 13:06:25.861481 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-utilities" (OuterVolumeSpecName: "utilities") pod "1d40e8ca-222b-4e41-b1c9-86291193147a" (UID: "1d40e8ca-222b-4e41-b1c9-86291193147a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:06:25.862330 master-0 kubenswrapper[7845]: I0223 13:06:25.862299 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-utilities" (OuterVolumeSpecName: "utilities") pod "4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" (UID: "4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:06:25.864558 master-0 kubenswrapper[7845]: I0223 13:06:25.864508 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d40e8ca-222b-4e41-b1c9-86291193147a-kube-api-access-8c7fg" (OuterVolumeSpecName: "kube-api-access-8c7fg") pod "1d40e8ca-222b-4e41-b1c9-86291193147a" (UID: "1d40e8ca-222b-4e41-b1c9-86291193147a"). InnerVolumeSpecName "kube-api-access-8c7fg". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:06:25.865202 master-0 kubenswrapper[7845]: I0223 13:06:25.865162 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-kube-api-access-7jhh8" (OuterVolumeSpecName: "kube-api-access-7jhh8") pod "4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" (UID: "4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4"). InnerVolumeSpecName "kube-api-access-7jhh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:06:25.895919 master-0 kubenswrapper[7845]: I0223 13:06:25.895860 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" (UID: "4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:06:25.962583 master-0 kubenswrapper[7845]: I0223 13:06:25.961813 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c7fg\" (UniqueName: \"kubernetes.io/projected/1d40e8ca-222b-4e41-b1c9-86291193147a-kube-api-access-8c7fg\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:25.962583 master-0 kubenswrapper[7845]: I0223 13:06:25.961849 7845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:25.962583 master-0 kubenswrapper[7845]: I0223 13:06:25.961861 7845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-utilities\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:25.962583 master-0 kubenswrapper[7845]: I0223 13:06:25.961872 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jhh8\" (UniqueName: \"kubernetes.io/projected/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-kube-api-access-7jhh8\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:25.962583 master-0 kubenswrapper[7845]: I0223 13:06:25.961882 7845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4-utilities\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:26.043912 master-0 kubenswrapper[7845]: I0223 13:06:26.043841 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d40e8ca-222b-4e41-b1c9-86291193147a" (UID: "1d40e8ca-222b-4e41-b1c9-86291193147a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:06:26.062958 master-0 kubenswrapper[7845]: I0223 13:06:26.062667 7845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d40e8ca-222b-4e41-b1c9-86291193147a-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:26.215218 master-0 kubenswrapper[7845]: I0223 13:06:26.215162 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac71a3d-cfbb-49d2-9a5c-c3ed714a933e" path="/var/lib/kubelet/pods/fac71a3d-cfbb-49d2-9a5c-c3ed714a933e/volumes"
Feb 23 13:06:26.341401 master-0 kubenswrapper[7845]: I0223 13:06:26.341316 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" event={"ID":"0d7283ee-8959-44b6-83fb-b152510485eb","Type":"ContainerStarted","Data":"7d827c55f4a02f2868cdfecb40bff1ab4ec8bc10c95a9e1fc8eee9bfba97732d"}
Feb 23 13:06:26.341401 master-0 kubenswrapper[7845]: I0223 13:06:26.341396 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" event={"ID":"0d7283ee-8959-44b6-83fb-b152510485eb","Type":"ContainerStarted","Data":"e30f446bb2714d380fa7909fd4a0293b5a66a259d785eaa0ff99a8d5b7fba280"}
Feb 23 13:06:26.341401 master-0 kubenswrapper[7845]: I0223 13:06:26.341412 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" event={"ID":"0d7283ee-8959-44b6-83fb-b152510485eb","Type":"ContainerStarted","Data":"44b7755ac7e8a439ff0fc3edb598f7964183e231ff745d6b5c721bfaa7e89066"}
Feb 23 13:06:26.341401 master-0 kubenswrapper[7845]: I0223 13:06:26.341422 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" event={"ID":"0d7283ee-8959-44b6-83fb-b152510485eb","Type":"ContainerStarted","Data":"f4152c7de869df80f0c905cfd7a6252eb8e9e684fe6b9642981a93d71e896532"}
Feb 23 13:06:26.344514 master-0 kubenswrapper[7845]: I0223 13:06:26.343700 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shjn2" event={"ID":"1d40e8ca-222b-4e41-b1c9-86291193147a","Type":"ContainerDied","Data":"00a8cc9938769758481eeb507a8a511e4fea4ac8603da42445f1e6fa2500df33"}
Feb 23 13:06:26.344514 master-0 kubenswrapper[7845]: I0223 13:06:26.343760 7845 scope.go:117] "RemoveContainer" containerID="6b9a9413c5c8e23acbf2ca9c481f0a8343082b1a43fb4299b0e86fce5894a54d"
Feb 23 13:06:26.344514 master-0 kubenswrapper[7845]: I0223 13:06:26.343760 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shjn2"
Feb 23 13:06:26.351595 master-0 kubenswrapper[7845]: I0223 13:06:26.351526 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxqsd" event={"ID":"b48d5b87-189b-45b6-ba55-37bd22d59eb6","Type":"ContainerStarted","Data":"09406b35a08959221b57f67d606490bebcad8cbca94e120038b2f19515d03c24"}
Feb 23 13:06:26.353574 master-0 kubenswrapper[7845]: I0223 13:06:26.353538 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccxr7" event={"ID":"4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4","Type":"ContainerDied","Data":"ac778133e25eb465803a668164b009d4ef07614c0d72a48dbffcdcb57920e9f5"}
Feb 23 13:06:26.353646 master-0 kubenswrapper[7845]: I0223 13:06:26.353620 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccxr7"
Feb 23 13:06:26.364044 master-0 kubenswrapper[7845]: I0223 13:06:26.363971 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrhg" event={"ID":"0128982b-01b4-49cb-ab4a-8759b844c86b","Type":"ContainerStarted","Data":"6d25ee30b6e558d6e17ed8f089e9a32ebdd806c414b14ab2676f5d6036462f3b"}
Feb 23 13:06:26.367431 master-0 kubenswrapper[7845]: I0223 13:06:26.367402 7845 scope.go:117] "RemoveContainer" containerID="d327710529f59d8c9da3bd6a73015ea11137381731e99ad4d928fa1511eb2b90"
Feb 23 13:06:26.367545 master-0 kubenswrapper[7845]: I0223 13:06:26.367501 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8xxs" event={"ID":"9c3f9dc5-d10d-452c-bf5d-c5830a444617","Type":"ContainerStarted","Data":"0ed5293fed0e9b927e2b467df392e8857f10552b71641db8e4c7533097fe7311"}
Feb 23 13:06:26.371478 master-0 kubenswrapper[7845]: I0223 13:06:26.371433 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mldw4" event={"ID":"29908b4a-0df5-4c46-b886-c968976c25fb","Type":"ContainerStarted","Data":"ecbaaafb69f12d4c763205504b8c489c8beb83ed166e8d3f0fa9f85e41507799"}
Feb 23 13:06:26.381191 master-0 kubenswrapper[7845]: I0223 13:06:26.381077 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" podStartSLOduration=2.381047454 podStartE2EDuration="2.381047454s" podCreationTimestamp="2026-02-23 13:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:06:26.376552268 +0000 UTC m=+320.372283129" watchObservedRunningTime="2026-02-23 13:06:26.381047454 +0000 UTC m=+320.376778325"
Feb 23 13:06:26.404452 master-0 kubenswrapper[7845]: I0223 13:06:26.404008 7845 scope.go:117] "RemoveContainer" containerID="652d473cdab52f393fe4242041de885641a475619de89058cd263fc1d5b3ca35"
Feb 23 13:06:26.433355 master-0 kubenswrapper[7845]: I0223 13:06:26.433219 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shjn2"]
Feb 23 13:06:26.441517 master-0 kubenswrapper[7845]: I0223 13:06:26.441466 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-shjn2"]
Feb 23 13:06:26.444489 master-0 kubenswrapper[7845]: I0223 13:06:26.444456 7845 scope.go:117] "RemoveContainer" containerID="568a3a11e000578b5ac04304482dc130dccde359b178556f465a305ccc23db65"
Feb 23 13:06:26.475593 master-0 kubenswrapper[7845]: I0223 13:06:26.475480 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mldw4" podStartSLOduration=3.785228803 podStartE2EDuration="30.475444827s" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="2026-02-23 13:05:59.086085395 +0000 UTC m=+293.081816266" lastFinishedPulling="2026-02-23 13:06:25.776301419 +0000 UTC m=+319.772032290" observedRunningTime="2026-02-23 13:06:26.466845575 +0000 UTC m=+320.462576436" watchObservedRunningTime="2026-02-23 13:06:26.475444827 +0000 UTC m=+320.471175698"
Feb 23 13:06:26.514588 master-0 kubenswrapper[7845]: I0223 13:06:26.512978 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfrhg" podStartSLOduration=2.809413211 podStartE2EDuration="30.512943291s" podCreationTimestamp="2026-02-23 13:05:56 +0000 UTC" firstStartedPulling="2026-02-23 13:05:58.07104462 +0000 UTC m=+292.066775491" lastFinishedPulling="2026-02-23 13:06:25.77457468 +0000 UTC m=+319.770305571" observedRunningTime="2026-02-23 13:06:26.501460338 +0000 UTC m=+320.497191209" watchObservedRunningTime="2026-02-23 13:06:26.512943291 +0000 UTC m=+320.508674182"
Feb 23 13:06:26.541784 master-0 kubenswrapper[7845]: I0223 13:06:26.541696 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r8xxs" podStartSLOduration=2.898132116 podStartE2EDuration="25.541672878s" podCreationTimestamp="2026-02-23 13:06:01 +0000 UTC" firstStartedPulling="2026-02-23 13:06:03.133438976 +0000 UTC m=+297.129169847" lastFinishedPulling="2026-02-23 13:06:25.776979728 +0000 UTC m=+319.772710609" observedRunningTime="2026-02-23 13:06:26.536181864 +0000 UTC m=+320.531912745" watchObservedRunningTime="2026-02-23 13:06:26.541672878 +0000 UTC m=+320.537403749"
Feb 23 13:06:26.562274 master-0 kubenswrapper[7845]: I0223 13:06:26.562163 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bxqsd" podStartSLOduration=3.747399377 podStartE2EDuration="24.562140354s" podCreationTimestamp="2026-02-23 13:06:02 +0000 UTC" firstStartedPulling="2026-02-23 13:06:05.166198703 +0000 UTC m=+299.161929564" lastFinishedPulling="2026-02-23 13:06:25.98093967 +0000 UTC m=+319.976670541" observedRunningTime="2026-02-23 13:06:26.558127231 +0000 UTC m=+320.553858102" watchObservedRunningTime="2026-02-23 13:06:26.562140354 +0000 UTC m=+320.557871225"
Feb 23 13:06:26.598386 master-0 kubenswrapper[7845]: I0223 13:06:26.598324 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccxr7"]
Feb 23 13:06:26.606749 master-0 kubenswrapper[7845]: I0223 13:06:26.606694 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccxr7"]
Feb 23 13:06:27.157368 master-0 kubenswrapper[7845]: I0223 13:06:27.157282 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:27.157665 master-0 kubenswrapper[7845]: I0223 13:06:27.157526 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:27.257461 master-0 kubenswrapper[7845]: I0223 13:06:27.257410 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mldw4"
Feb 23 13:06:27.257461 master-0 kubenswrapper[7845]: I0223 13:06:27.257453 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mldw4"
Feb 23 13:06:28.211509 master-0 kubenswrapper[7845]: I0223 13:06:28.211401 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d40e8ca-222b-4e41-b1c9-86291193147a" path="/var/lib/kubelet/pods/1d40e8ca-222b-4e41-b1c9-86291193147a/volumes"
Feb 23 13:06:28.212310 master-0 kubenswrapper[7845]: I0223 13:06:28.212127 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" path="/var/lib/kubelet/pods/4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4/volumes"
Feb 23 13:06:28.224648 master-0 kubenswrapper[7845]: I0223 13:06:28.224578 7845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sfrhg" podUID="0128982b-01b4-49cb-ab4a-8759b844c86b" containerName="registry-server" probeResult="failure" output=<
Feb 23 13:06:28.224648 master-0 kubenswrapper[7845]: timeout: failed to connect service ":50051" within 1s
Feb 23 13:06:28.224648 master-0 kubenswrapper[7845]: >
Feb 23 13:06:28.302098 master-0 kubenswrapper[7845]: I0223 13:06:28.301976 7845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mldw4" podUID="29908b4a-0df5-4c46-b886-c968976c25fb" containerName="registry-server" probeResult="failure" output=<
Feb 23 13:06:28.302098 master-0 kubenswrapper[7845]: timeout: failed to connect service ":50051" within 1s
Feb 23 13:06:28.302098 master-0 kubenswrapper[7845]: >
Feb 23 13:06:30.418183 master-0 kubenswrapper[7845]: I0223 13:06:30.418078 7845 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 23 13:06:30.419000 master-0 kubenswrapper[7845]: I0223 13:06:30.418493 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" containerID="cri-o://321eaf326ad8a489a13ada6c53cf34c2c99e6344cfe3f0727f5eef006f9c7e8e" gracePeriod=30
Feb 23 13:06:30.419000 master-0 kubenswrapper[7845]: I0223 13:06:30.418623 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://dfd86a94ccff1eeb13e1ddaabeeeb38c3d4bc54e7d5689b425d76ab48acf7562" gracePeriod=30
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.421621 7845 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.421932 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.421950 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.421982 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d40e8ca-222b-4e41-b1c9-86291193147a" containerName="extract-content"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.421994 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d40e8ca-222b-4e41-b1c9-86291193147a" containerName="extract-content"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.422012 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" containerName="extract-content"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422023 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" containerName="extract-content"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.422037 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d40e8ca-222b-4e41-b1c9-86291193147a" containerName="extract-utilities"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422048 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d40e8ca-222b-4e41-b1c9-86291193147a" containerName="extract-utilities"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.422064 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422075 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.422093 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422102 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.422121 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422132 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.422147 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422157 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: E0223 13:06:30.422169 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" containerName="extract-utilities"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422179 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" containerName="extract-utilities"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422366 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a87dfd2-d2d6-4359-96f8-bf01a5d7b9a4" containerName="extract-content"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422385 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422400 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422416 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422439 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422450 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d40e8ca-222b-4e41-b1c9-86291193147a" containerName="extract-content"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.422737 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager"
Feb 23 13:06:30.425283 master-0 kubenswrapper[7845]: I0223 13:06:30.423903 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:30.528362 master-0 kubenswrapper[7845]: I0223 13:06:30.527573 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:30.528362 master-0 kubenswrapper[7845]: I0223 13:06:30.527650 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:30.629036 master-0 kubenswrapper[7845]: I0223 13:06:30.628877 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:30.629279 master-0 kubenswrapper[7845]: I0223 13:06:30.629034 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:30.629279 master-0 kubenswrapper[7845]: I0223 13:06:30.629138 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:30.629381 master-0 kubenswrapper[7845]: I0223 13:06:30.629240 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:30.667182 master-0 kubenswrapper[7845]: I0223 13:06:30.667108 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:30.676943 master-0 kubenswrapper[7845]: I0223 13:06:30.676848 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:06:31.067155 master-0 kubenswrapper[7845]: I0223 13:06:31.067104 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:06:31.106223 master-0 kubenswrapper[7845]: I0223 13:06:31.106104 7845 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="cc724954-8885-48bc-96dd-eb85b33713e6"
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.236957 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237058 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237104 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237130 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237152 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237195 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config" (OuterVolumeSpecName: "config") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237216 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs" (OuterVolumeSpecName: "logs") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237258 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") "
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237283 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets" (OuterVolumeSpecName: "secrets") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.237341 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.238053 7845 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.238068 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.238077 7845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.238085 7845 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:31.238293 master-0 kubenswrapper[7845]: I0223 13:06:31.238095 7845 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:31.409185 master-0 kubenswrapper[7845]: I0223 13:06:31.409138 7845 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="dfd86a94ccff1eeb13e1ddaabeeeb38c3d4bc54e7d5689b425d76ab48acf7562" exitCode=0
Feb 23 13:06:31.409185 master-0 kubenswrapper[7845]: I0223 13:06:31.409173 7845 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="321eaf326ad8a489a13ada6c53cf34c2c99e6344cfe3f0727f5eef006f9c7e8e" exitCode=0
Feb 23 13:06:31.409462 master-0 kubenswrapper[7845]: I0223 13:06:31.409228 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5"
Feb 23 13:06:31.409462 master-0 kubenswrapper[7845]: I0223 13:06:31.409279 7845 scope.go:117] "RemoveContainer" containerID="611039cddaab573cdf7f17e37d453d213099869d69ffbabcba17a4b655a9aee4"
Feb 23 13:06:31.409462 master-0 kubenswrapper[7845]: I0223 13:06:31.409403 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 23 13:06:31.415144 master-0 kubenswrapper[7845]: I0223 13:06:31.415098 7845 generic.go:334] "Generic (PLEG): container finished" podID="ce5fa293-4526-4dd9-a0e4-a1db7d667092" containerID="19aea6b0c64c2242c1162a5644f9c7d995fa9caa7710602094da7d8d77b66e03" exitCode=0
Feb 23 13:06:31.415240 master-0 kubenswrapper[7845]: I0223 13:06:31.415164 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"ce5fa293-4526-4dd9-a0e4-a1db7d667092","Type":"ContainerDied","Data":"19aea6b0c64c2242c1162a5644f9c7d995fa9caa7710602094da7d8d77b66e03"}
Feb 23 13:06:31.417112 master-0 kubenswrapper[7845]: I0223 13:06:31.417054 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"}
Feb 23 13:06:31.417207 master-0 kubenswrapper[7845]: I0223 13:06:31.417116 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"}
Feb 23 13:06:31.417207 master-0 kubenswrapper[7845]: I0223 13:06:31.417132 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"3dcb59345b5bc0117b6a00f1149c42a48da8235be304949c4a08edf500dfc736"}
Feb 23 13:06:32.101524 master-0 kubenswrapper[7845]: I0223 13:06:32.101426 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:32.102038 master-0 kubenswrapper[7845]: I0223 13:06:32.102015 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:32.142315 master-0 kubenswrapper[7845]: I0223 13:06:32.142231 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:32.212859 master-0 kubenswrapper[7845]: I0223 13:06:32.212795 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ad9373c007a4fcd25e70622bdc8deb" path="/var/lib/kubelet/pods/c9ad9373c007a4fcd25e70622bdc8deb/volumes"
Feb 23 13:06:32.213377 master-0 kubenswrapper[7845]: I0223 13:06:32.213340 7845 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Feb 23 13:06:32.228983 master-0 kubenswrapper[7845]: I0223 13:06:32.228930 7845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 23 13:06:32.228983 master-0 kubenswrapper[7845]: I0223 13:06:32.228969 7845 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="cc724954-8885-48bc-96dd-eb85b33713e6"
Feb 23 13:06:32.233025 master-0 kubenswrapper[7845]: I0223 13:06:32.232943 7845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 23 13:06:32.233097 master-0 kubenswrapper[7845]: I0223 13:06:32.233018 7845 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="cc724954-8885-48bc-96dd-eb85b33713e6"
Feb 23 13:06:32.427013 master-0 kubenswrapper[7845]: I0223 13:06:32.426837 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"}
Feb 23 13:06:32.427013 master-0 kubenswrapper[7845]: I0223 13:06:32.426908 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"}
Feb 23 13:06:32.465296 master-0 kubenswrapper[7845]: I0223 13:06:32.464066 7845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.464043425 podStartE2EDuration="2.464043425s" podCreationTimestamp="2026-02-23 13:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:06:32.462013228 +0000 UTC m=+326.457744149" watchObservedRunningTime="2026-02-23 13:06:32.464043425 +0000 UTC m=+326.459774306"
Feb 23 13:06:32.494337 master-0 kubenswrapper[7845]: I0223 13:06:32.494146 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:32.766957 master-0 kubenswrapper[7845]: I0223 13:06:32.766890 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 23 13:06:32.863143 master-0 kubenswrapper[7845]: I0223 13:06:32.863086 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-var-lock\") pod \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") "
Feb 23 13:06:32.863411 master-0 kubenswrapper[7845]: I0223 13:06:32.863174 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kube-api-access\") pod \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") "
Feb 23 13:06:32.863411 master-0 kubenswrapper[7845]: I0223 13:06:32.863216 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kubelet-dir\") pod \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\" (UID: \"ce5fa293-4526-4dd9-a0e4-a1db7d667092\") "
Feb 23 13:06:32.863411 master-0 kubenswrapper[7845]: I0223 13:06:32.863345 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-var-lock" (OuterVolumeSpecName: "var-lock") pod "ce5fa293-4526-4dd9-a0e4-a1db7d667092" (UID: "ce5fa293-4526-4dd9-a0e4-a1db7d667092"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:32.863544 master-0 kubenswrapper[7845]: I0223 13:06:32.863431 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ce5fa293-4526-4dd9-a0e4-a1db7d667092" (UID: "ce5fa293-4526-4dd9-a0e4-a1db7d667092"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:32.863900 master-0 kubenswrapper[7845]: I0223 13:06:32.863851 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:32.863900 master-0 kubenswrapper[7845]: I0223 13:06:32.863894 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:32.866506 master-0 kubenswrapper[7845]: I0223 13:06:32.866459 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ce5fa293-4526-4dd9-a0e4-a1db7d667092" (UID: "ce5fa293-4526-4dd9-a0e4-a1db7d667092"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:06:32.965460 master-0 kubenswrapper[7845]: I0223 13:06:32.965385 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce5fa293-4526-4dd9-a0e4-a1db7d667092-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:06:33.388018 master-0 kubenswrapper[7845]: I0223 13:06:33.387907 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:33.388018 master-0 kubenswrapper[7845]: I0223 13:06:33.388018 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:33.442661 master-0 kubenswrapper[7845]: I0223 13:06:33.442523 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"ce5fa293-4526-4dd9-a0e4-a1db7d667092","Type":"ContainerDied","Data":"843d775bbad7c7fe41df23fb96ec59c3909440741cf205f5eb1b07a6fc2a50c5"} Feb 23 13:06:33.442661 master-0 kubenswrapper[7845]: I0223 13:06:33.442595 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="843d775bbad7c7fe41df23fb96ec59c3909440741cf205f5eb1b07a6fc2a50c5" Feb 23 13:06:33.442661 master-0 kubenswrapper[7845]: I0223 13:06:33.442595 7845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:06:33.464915 master-0 kubenswrapper[7845]: I0223 13:06:33.464819 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:33.531414 master-0 kubenswrapper[7845]: I0223 13:06:33.531345 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:37.233163 master-0 kubenswrapper[7845]: I0223 13:06:37.233087 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:06:37.309641 master-0 kubenswrapper[7845]: I0223 13:06:37.309483 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:06:37.322387 master-0 kubenswrapper[7845]: I0223 13:06:37.322333 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:37.384865 master-0 kubenswrapper[7845]: I0223 13:06:37.384816 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:38.244085 master-0 kubenswrapper[7845]: E0223 13:06:38.243979 7845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[auth-proxy-config], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" Feb 23 13:06:38.486871 master-0 kubenswrapper[7845]: I0223 13:06:38.486787 7845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:39.718372 master-0 kubenswrapper[7845]: I0223 13:06:39.718270 7845 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 23 13:06:39.719177 master-0 kubenswrapper[7845]: I0223 13:06:39.718762 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" containerID="cri-o://7e9526f21d0004f4be338f194dd1d8ef03df5393e98a9f29994fc1a1aea54d33" gracePeriod=15 Feb 23 13:06:39.719177 master-0 kubenswrapper[7845]: I0223 13:06:39.718901 7845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6f08e1116d82edc6d1a5a54978dd03f762876e6846750a14b497bad3e1b62afe" gracePeriod=15 Feb 23 13:06:39.721164 master-0 kubenswrapper[7845]: I0223 13:06:39.721098 7845 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 23 13:06:39.721945 master-0 kubenswrapper[7845]: E0223 13:06:39.721895 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5fa293-4526-4dd9-a0e4-a1db7d667092" containerName="installer" Feb 23 13:06:39.722101 master-0 kubenswrapper[7845]: I0223 13:06:39.721940 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5fa293-4526-4dd9-a0e4-a1db7d667092" containerName="installer" Feb 23 13:06:39.722101 master-0 kubenswrapper[7845]: E0223 13:06:39.722036 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:06:39.722326 master-0 kubenswrapper[7845]: I0223 13:06:39.722104 7845 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:06:39.722326 master-0 kubenswrapper[7845]: E0223 13:06:39.722128 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:06:39.722326 master-0 kubenswrapper[7845]: I0223 13:06:39.722191 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:06:39.722326 master-0 kubenswrapper[7845]: E0223 13:06:39.722226 7845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:06:39.722326 master-0 kubenswrapper[7845]: I0223 13:06:39.722316 7845 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:06:39.722909 master-0 kubenswrapper[7845]: I0223 13:06:39.722824 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:06:39.723051 master-0 kubenswrapper[7845]: I0223 13:06:39.722914 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:06:39.723051 master-0 kubenswrapper[7845]: I0223 13:06:39.722992 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5fa293-4526-4dd9-a0e4-a1db7d667092" containerName="installer" Feb 23 13:06:39.723283 master-0 kubenswrapper[7845]: I0223 13:06:39.723079 7845 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:06:39.727140 master-0 kubenswrapper[7845]: I0223 13:06:39.727046 7845 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 
13:06:39.727490 master-0 kubenswrapper[7845]: I0223 13:06:39.727285 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.728174 master-0 kubenswrapper[7845]: I0223 13:06:39.728126 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.878100 master-0 kubenswrapper[7845]: I0223 13:06:39.877939 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.878434 master-0 kubenswrapper[7845]: I0223 13:06:39.878158 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.878434 master-0 kubenswrapper[7845]: I0223 13:06:39.878226 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.878434 master-0 kubenswrapper[7845]: I0223 13:06:39.878314 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.878434 master-0 kubenswrapper[7845]: I0223 13:06:39.878368 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.878754 master-0 kubenswrapper[7845]: I0223 13:06:39.878512 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.878754 master-0 kubenswrapper[7845]: I0223 13:06:39.878674 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.878754 master-0 kubenswrapper[7845]: I0223 13:06:39.878713 7845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980018 master-0 kubenswrapper[7845]: I0223 13:06:39.979862 7845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980018 master-0 kubenswrapper[7845]: I0223 13:06:39.979941 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980018 master-0 kubenswrapper[7845]: I0223 13:06:39.979970 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980018 master-0 kubenswrapper[7845]: I0223 13:06:39.979991 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980018 master-0 kubenswrapper[7845]: I0223 13:06:39.980022 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.980498 master-0 kubenswrapper[7845]: I0223 13:06:39.980053 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.980498 master-0 kubenswrapper[7845]: I0223 13:06:39.980166 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980498 master-0 kubenswrapper[7845]: I0223 13:06:39.980271 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.980498 master-0 kubenswrapper[7845]: I0223 13:06:39.980411 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.980498 master-0 kubenswrapper[7845]: I0223 13:06:39.980483 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.980712 master-0 kubenswrapper[7845]: I0223 13:06:39.980556 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:39.980712 master-0 kubenswrapper[7845]: I0223 13:06:39.980591 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980712 master-0 kubenswrapper[7845]: I0223 13:06:39.980625 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980712 master-0 kubenswrapper[7845]: I0223 13:06:39.980658 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980869 master-0 kubenswrapper[7845]: I0223 13:06:39.980756 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:39.980869 master-0 kubenswrapper[7845]: I0223 13:06:39.980768 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:40.506440 master-0 kubenswrapper[7845]: I0223 13:06:40.506313 7845 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="6f08e1116d82edc6d1a5a54978dd03f762876e6846750a14b497bad3e1b62afe" exitCode=0 Feb 23 13:06:40.668440 master-0 kubenswrapper[7845]: I0223 13:06:40.668362 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:40.668440 master-0 kubenswrapper[7845]: I0223 13:06:40.668438 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:40.668932 master-0 kubenswrapper[7845]: I0223 13:06:40.668461 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:40.668932 master-0 kubenswrapper[7845]: I0223 13:06:40.668487 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:40.677656 master-0 kubenswrapper[7845]: I0223 13:06:40.677564 7845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:40.678678 master-0 kubenswrapper[7845]: I0223 13:06:40.678598 7845 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:40.738610 master-0 kubenswrapper[7845]: I0223 13:06:40.738550 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:40.745442 master-0 kubenswrapper[7845]: I0223 13:06:40.745379 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:06:40.745994 master-0 kubenswrapper[7845]: I0223 13:06:40.745948 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:40.750435 master-0 kubenswrapper[7845]: I0223 13:06:40.749017 7845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 23 13:06:40.773361 master-0 kubenswrapper[7845]: E0223 13:06:40.772928 7845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podc2e50127_3c2e_4514_ace5_2cf6f9223abf.slice/crio-conmon-87320ceaa2976029b0853261379f23dc5fc274ad76d399f47415010358a9fd41.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:06:40.807647 master-0 kubenswrapper[7845]: W0223 13:06:40.807605 7845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded33f74deb6fdef2cfa169d8db13e51c.slice/crio-a356ead5da6fa11053b4f6032b0e4b23eab458d556eaf1bb2ab3b5d9b3aca4d2 WatchSource:0}: Error finding container a356ead5da6fa11053b4f6032b0e4b23eab458d556eaf1bb2ab3b5d9b3aca4d2: Status 404 returned error can't find the container with id a356ead5da6fa11053b4f6032b0e4b23eab458d556eaf1bb2ab3b5d9b3aca4d2 Feb 23 13:06:41.515381 master-0 kubenswrapper[7845]: I0223 13:06:41.515166 7845 generic.go:334] "Generic (PLEG): container 
finished" podID="c2e50127-3c2e-4514-ace5-2cf6f9223abf" containerID="87320ceaa2976029b0853261379f23dc5fc274ad76d399f47415010358a9fd41" exitCode=0 Feb 23 13:06:41.515381 master-0 kubenswrapper[7845]: I0223 13:06:41.515286 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"c2e50127-3c2e-4514-ace5-2cf6f9223abf","Type":"ContainerDied","Data":"87320ceaa2976029b0853261379f23dc5fc274ad76d399f47415010358a9fd41"} Feb 23 13:06:41.517477 master-0 kubenswrapper[7845]: I0223 13:06:41.517424 7845 generic.go:334] "Generic (PLEG): container finished" podID="ed33f74deb6fdef2cfa169d8db13e51c" containerID="9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78" exitCode=0 Feb 23 13:06:41.517607 master-0 kubenswrapper[7845]: I0223 13:06:41.517499 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerDied","Data":"9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78"} Feb 23 13:06:41.517607 master-0 kubenswrapper[7845]: I0223 13:06:41.517528 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"a356ead5da6fa11053b4f6032b0e4b23eab458d556eaf1bb2ab3b5d9b3aca4d2"} Feb 23 13:06:41.521509 master-0 kubenswrapper[7845]: I0223 13:06:41.521437 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"39fda2f491fa2a50f4f315b834ed6d23","Type":"ContainerStarted","Data":"7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f"} Feb 23 13:06:41.521737 master-0 kubenswrapper[7845]: I0223 13:06:41.521708 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"39fda2f491fa2a50f4f315b834ed6d23","Type":"ContainerStarted","Data":"1e4a89c63867c66249f3be8d13ff9c7bfaab9b37c45169bdf97b3f2b62ddd38e"} Feb 23 13:06:41.528935 master-0 kubenswrapper[7845]: I0223 13:06:41.528897 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:41.529190 master-0 kubenswrapper[7845]: I0223 13:06:41.529160 7845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:41.818792 master-0 kubenswrapper[7845]: I0223 13:06:41.818723 7845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:41.820724 master-0 kubenswrapper[7845]: I0223 13:06:41.819523 7845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:42.089505 master-0 kubenswrapper[7845]: W0223 13:06:42.089340 7845 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-5499c": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-5499c&resourceVersion=9539": dial tcp 192.168.32.10:6443: connect: connection refused Feb 23 13:06:42.089505 master-0 
kubenswrapper[7845]: E0223 13:06:42.089466 7845 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-5499c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-5499c&resourceVersion=9539\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 23 13:06:42.529880 master-0 kubenswrapper[7845]: I0223 13:06:42.529443 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e"}
Feb 23 13:06:42.810833 master-0 kubenswrapper[7845]: I0223 13:06:42.810711 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 23 13:06:42.933177 master-0 kubenswrapper[7845]: I0223 13:06:42.932998 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kube-api-access\") pod \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") "
Feb 23 13:06:42.933865 master-0 kubenswrapper[7845]: I0223 13:06:42.933221 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-var-lock\") pod \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") "
Feb 23 13:06:42.933865 master-0 kubenswrapper[7845]: I0223 13:06:42.933280 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kubelet-dir\") pod \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\" (UID: \"c2e50127-3c2e-4514-ace5-2cf6f9223abf\") "
Feb 23 13:06:42.933865 master-0 kubenswrapper[7845]: I0223 13:06:42.933286 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-var-lock" (OuterVolumeSpecName: "var-lock") pod "c2e50127-3c2e-4514-ace5-2cf6f9223abf" (UID: "c2e50127-3c2e-4514-ace5-2cf6f9223abf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:42.933865 master-0 kubenswrapper[7845]: I0223 13:06:42.933401 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c2e50127-3c2e-4514-ace5-2cf6f9223abf" (UID: "c2e50127-3c2e-4514-ace5-2cf6f9223abf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:42.933865 master-0 kubenswrapper[7845]: I0223 13:06:42.933499 7845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:42.933865 master-0 kubenswrapper[7845]: I0223 13:06:42.933514 7845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:42.936054 master-0 kubenswrapper[7845]: I0223 13:06:42.935988 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c2e50127-3c2e-4514-ace5-2cf6f9223abf" (UID: "c2e50127-3c2e-4514-ace5-2cf6f9223abf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:06:43.035826 master-0 kubenswrapper[7845]: I0223 13:06:43.035765 7845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2e50127-3c2e-4514-ace5-2cf6f9223abf-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:43.120378 master-0 kubenswrapper[7845]: I0223 13:06:43.115085 7845 kubelet_pods.go:1000] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Feb 23 13:06:43.166773 master-0 kubenswrapper[7845]: I0223 13:06:43.163941 7845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:06:43.559393 master-0 kubenswrapper[7845]: I0223 13:06:43.550496 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"c2e50127-3c2e-4514-ace5-2cf6f9223abf","Type":"ContainerDied","Data":"835102869e1f66afd25840f4e26fbf1c829644e975ef14b09eb97d3f81d79a06"}
Feb 23 13:06:43.559393 master-0 kubenswrapper[7845]: I0223 13:06:43.550549 7845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835102869e1f66afd25840f4e26fbf1c829644e975ef14b09eb97d3f81d79a06"
Feb 23 13:06:43.559393 master-0 kubenswrapper[7845]: I0223 13:06:43.550652 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 23 13:06:43.563759 master-0 kubenswrapper[7845]: I0223 13:06:43.563638 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3"}
Feb 23 13:06:43.563759 master-0 kubenswrapper[7845]: I0223 13:06:43.563756 7845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4"}
Feb 23 13:06:43.996269 master-0 kubenswrapper[7845]: I0223 13:06:43.995959 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:06:44.187065 master-0 kubenswrapper[7845]: I0223 13:06:44.186997 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 23 13:06:44.187065 master-0 kubenswrapper[7845]: I0223 13:06:44.187064 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 23 13:06:44.187361 master-0 kubenswrapper[7845]: I0223 13:06:44.187103 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 23 13:06:44.187361 master-0 kubenswrapper[7845]: I0223 13:06:44.187135 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 23 13:06:44.187361 master-0 kubenswrapper[7845]: I0223 13:06:44.187186 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 23 13:06:44.187361 master-0 kubenswrapper[7845]: I0223 13:06:44.187219 7845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 23 13:06:44.187361 master-0 kubenswrapper[7845]: I0223 13:06:44.187270 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets" (OuterVolumeSpecName: "secrets") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:44.187361 master-0 kubenswrapper[7845]: I0223 13:06:44.187329 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:44.187361 master-0 kubenswrapper[7845]: I0223 13:06:44.187353 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187407 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs" (OuterVolumeSpecName: "logs") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187428 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187445 7845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config" (OuterVolumeSpecName: "config") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187499 7845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187513 7845 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187523 7845 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187532 7845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187540 7845 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:44.187588 master-0 kubenswrapper[7845]: I0223 13:06:44.187547 7845 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") on node \"master-0\" DevicePath \"\""
Feb 23 13:06:44.212042 master-0 kubenswrapper[7845]: I0223 13:06:44.211984 7845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687e92a6cecf1e2beeef16a0b322ad08" path="/var/lib/kubelet/pods/687e92a6cecf1e2beeef16a0b322ad08/volumes"
Feb 23 13:06:44.212545 master-0 kubenswrapper[7845]: I0223 13:06:44.212518 7845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 23 13:06:44.585801 master-0 kubenswrapper[7845]: I0223 13:06:44.585743 7845 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="7e9526f21d0004f4be338f194dd1d8ef03df5393e98a9f29994fc1a1aea54d33" exitCode=0
Feb 23 13:06:44.586015 master-0 kubenswrapper[7845]: I0223 13:06:44.585858 7845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 23 13:06:46.407350 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 23 13:06:46.447548 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 23 13:06:46.448026 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 23 13:06:46.450721 master-0 systemd[1]: kubelet.service: Consumed 49.146s CPU time.
Feb 23 13:06:46.473686 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 23 13:06:46.640387 master-0 kubenswrapper[17411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:06:46.640387 master-0 kubenswrapper[17411]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 23 13:06:46.640387 master-0 kubenswrapper[17411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:06:46.640387 master-0 kubenswrapper[17411]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:06:46.640387 master-0 kubenswrapper[17411]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 23 13:06:46.640387 master-0 kubenswrapper[17411]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 13:06:46.641405 master-0 kubenswrapper[17411]: I0223 13:06:46.640367 17411 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 23 13:06:46.643523 master-0 kubenswrapper[17411]: W0223 13:06:46.643478 17411 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:06:46.643523 master-0 kubenswrapper[17411]: W0223 13:06:46.643503 17411 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:06:46.643523 master-0 kubenswrapper[17411]: W0223 13:06:46.643510 17411 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:06:46.643523 master-0 kubenswrapper[17411]: W0223 13:06:46.643524 17411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:06:46.643523 master-0 kubenswrapper[17411]: W0223 13:06:46.643530 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:06:46.643523 master-0 kubenswrapper[17411]: W0223 13:06:46.643536 17411 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643543 17411 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643550 17411 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643556 17411 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643563 17411 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643569 17411 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643576 17411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643583 17411 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643590 17411 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643597 17411 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643603 17411 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643608 17411 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643616 17411 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643624 17411 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643629 17411 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643635 17411 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643640 17411 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643645 17411 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643650 17411 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643656 17411 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:06:46.643901 master-0 kubenswrapper[17411]: W0223 13:06:46.643661 17411 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643666 17411 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643671 17411 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643676 17411 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643681 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643686 17411 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643691 17411 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643696 17411 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643702 17411 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643708 17411 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643713 17411 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643718 17411 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643724 17411 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643731 17411 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643740 17411 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643747 17411 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643752 17411 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643758 17411 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643764 17411 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:06:46.645050 master-0 kubenswrapper[17411]: W0223 13:06:46.643770 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643776 17411 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643781 17411 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643786 17411 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643791 17411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643796 17411 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643801 17411 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643806 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643811 17411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643816 17411 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643822 17411 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643828 17411 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643833 17411 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643839 17411 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643845 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643859 17411 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643864 17411 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643870 17411 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643875 17411 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643880 17411 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:06:46.646081 master-0 kubenswrapper[17411]: W0223 13:06:46.643886 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: W0223 13:06:46.643893 17411 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: W0223 13:06:46.643898 17411 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: W0223 13:06:46.643904 17411 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: W0223 13:06:46.643910 17411 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: W0223 13:06:46.643916 17411 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: W0223 13:06:46.643922 17411 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: W0223 13:06:46.643929 17411 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644038 17411 flags.go:64] FLAG: --address="0.0.0.0"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644049 17411 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644058 17411 flags.go:64] FLAG: --anonymous-auth="true"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644066 17411 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644073 17411 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644080 17411 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644089 17411 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644097 17411 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644104 17411 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644110 17411 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644117 17411 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644123 17411 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644129 17411 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644135 17411 flags.go:64] FLAG: --cgroup-root=""
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644141 17411 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 23 13:06:46.647312 master-0 kubenswrapper[17411]: I0223 13:06:46.644148 17411 flags.go:64] FLAG: --client-ca-file=""
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644153 17411 flags.go:64] FLAG: --cloud-config=""
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644159 17411 flags.go:64] FLAG: --cloud-provider=""
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644165 17411 flags.go:64] FLAG: --cluster-dns="[]"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644173 17411 flags.go:64] FLAG: --cluster-domain=""
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644179 17411 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644185 17411 flags.go:64] FLAG: --config-dir=""
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644192 17411 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644201 17411 flags.go:64] FLAG: --container-log-max-files="5"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644211 17411 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644220 17411 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644228 17411 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644236 17411 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644265 17411 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644281 17411 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644291 17411 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644301 17411 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644307 17411 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644316 17411 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644322 17411 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644328 17411 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644334 17411 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644340 17411 flags.go:64] FLAG: --enable-server="true"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644345 17411 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644353 17411 flags.go:64] FLAG: --event-burst="100"
Feb 23 13:06:46.648626 master-0 kubenswrapper[17411]: I0223 13:06:46.644360 17411 flags.go:64] FLAG: --event-qps="50"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644365 17411 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644372 17411 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644378 17411 flags.go:64] FLAG: --eviction-hard=""
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644385 17411 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644390 17411 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644396 17411 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644402 17411 flags.go:64] FLAG: --eviction-soft=""
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644407 17411 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644418 17411 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644423 17411 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644429 17411 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644435 17411 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644441 17411 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644447 17411 flags.go:64] FLAG: --feature-gates=""
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644454 17411 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644460 17411 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644466 17411 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644474 17411 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644480 17411 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644486 17411 flags.go:64] FLAG: --help="false"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644492 17411 flags.go:64] FLAG: --hostname-override=""
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644499 17411 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644505 17411 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644511 17411 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 13:06:46.649994 master-0 kubenswrapper[17411]: I0223 13:06:46.644516 17411 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644522 17411 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644528 17411 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644533 17411 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644539 17411 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644545 17411 flags.go:64] FLAG: --kube-api-burst="100"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644551 17411 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644558 17411 flags.go:64] FLAG: --kube-api-qps="50"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644566 17411 flags.go:64] FLAG: --kube-reserved=""
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644604 17411 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644612 17411 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644619 17411 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644625 17411 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644632 17411 flags.go:64] FLAG: --lock-file=""
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644638 17411 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644644 17411 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644654 17411 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644663 17411 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644669 17411 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644675 17411 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644681 17411 flags.go:64] FLAG: --logging-format="text"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644687 17411 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644694 17411 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644700 17411 flags.go:64] FLAG: --manifest-url=""
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644706 17411 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 13:06:46.651499 master-0 kubenswrapper[17411]: I0223 13:06:46.644714 17411 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644720 17411 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644728 17411 flags.go:64] FLAG: --max-pods="110"
Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644734 17411 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644744 17411 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644750 17411 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644757 17411 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644763 17411 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644770 
17411 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644777 17411 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644791 17411 flags.go:64] FLAG: --node-status-max-images="50" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644797 17411 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644803 17411 flags.go:64] FLAG: --oom-score-adj="-999" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644810 17411 flags.go:64] FLAG: --pod-cidr="" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644816 17411 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644826 17411 flags.go:64] FLAG: --pod-manifest-path="" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644832 17411 flags.go:64] FLAG: --pod-max-pids="-1" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644838 17411 flags.go:64] FLAG: --pods-per-core="0" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644844 17411 flags.go:64] FLAG: --port="10250" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644850 17411 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644857 17411 flags.go:64] FLAG: --provider-id="" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644862 17411 flags.go:64] FLAG: --qos-reserved="" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644869 17411 flags.go:64] FLAG: 
--read-only-port="10255" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644877 17411 flags.go:64] FLAG: --register-node="true" Feb 23 13:06:46.652932 master-0 kubenswrapper[17411]: I0223 13:06:46.644883 17411 flags.go:64] FLAG: --register-schedulable="true" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644889 17411 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644898 17411 flags.go:64] FLAG: --registry-burst="10" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644904 17411 flags.go:64] FLAG: --registry-qps="5" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644910 17411 flags.go:64] FLAG: --reserved-cpus="" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644916 17411 flags.go:64] FLAG: --reserved-memory="" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644923 17411 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644929 17411 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644935 17411 flags.go:64] FLAG: --rotate-certificates="false" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644941 17411 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644948 17411 flags.go:64] FLAG: --runonce="false" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644955 17411 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644961 17411 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644967 17411 flags.go:64] FLAG: --seccomp-default="false" Feb 
23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644972 17411 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644978 17411 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644985 17411 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644991 17411 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.644997 17411 flags.go:64] FLAG: --storage-driver-password="root" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.645003 17411 flags.go:64] FLAG: --storage-driver-secure="false" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.645008 17411 flags.go:64] FLAG: --storage-driver-table="stats" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.645014 17411 flags.go:64] FLAG: --storage-driver-user="root" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.645020 17411 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.645026 17411 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.645032 17411 flags.go:64] FLAG: --system-cgroups="" Feb 23 13:06:46.654341 master-0 kubenswrapper[17411]: I0223 13:06:46.645038 17411 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645047 17411 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645054 17411 flags.go:64] FLAG: --tls-cert-file="" Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645060 17411 flags.go:64] FLAG: --tls-cipher-suites="[]" 
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645069 17411 flags.go:64] FLAG: --tls-min-version=""
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645075 17411 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645084 17411 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645090 17411 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645096 17411 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645103 17411 flags.go:64] FLAG: --v="2"
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645110 17411 flags.go:64] FLAG: --version="false"
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645118 17411 flags.go:64] FLAG: --vmodule=""
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645124 17411 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: I0223 13:06:46.645131 17411 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645309 17411 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645317 17411 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645323 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645329 17411 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645336 17411 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645343 17411 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645349 17411 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645355 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:06:46.655642 master-0 kubenswrapper[17411]: W0223 13:06:46.645360 17411 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645366 17411 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645371 17411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645377 17411 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645383 17411 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645390 17411 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645395 17411 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645400 17411 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645405 17411 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645410 17411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645416 17411 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645421 17411 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645426 17411 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645431 17411 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645436 17411 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645441 17411 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645448 17411 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645453 17411 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645458 17411 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645463 17411 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:06:46.656847 master-0 kubenswrapper[17411]: W0223 13:06:46.645470 17411 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645476 17411 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645482 17411 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645487 17411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645492 17411 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645497 17411 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645503 17411 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645508 17411 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645514 17411 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645519 17411 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645524 17411 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645531 17411 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645537 17411 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645542 17411 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645547 17411 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645552 17411 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645559 17411 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645565 17411 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645572 17411 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645578 17411 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:06:46.658126 master-0 kubenswrapper[17411]: W0223 13:06:46.645585 17411 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645593 17411 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645600 17411 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645607 17411 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645612 17411 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645617 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645623 17411 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645628 17411 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645635 17411 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645641 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645648 17411 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645654 17411 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645661 17411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645666 17411 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645671 17411 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645676 17411 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645681 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645686 17411 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645691 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:06:46.659214 master-0 kubenswrapper[17411]: W0223 13:06:46.645696 17411 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.645701 17411 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.645706 17411 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.645711 17411 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.645716 17411 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: I0223 13:06:46.645735 17411 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: I0223 13:06:46.654147 17411 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: I0223 13:06:46.654211 17411 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.654403 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.654425 17411 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.654435 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.654444 17411 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.654454 17411 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.654464 17411 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.654473 17411 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 13:06:46.660490 master-0 kubenswrapper[17411]: W0223 13:06:46.654482 17411 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654489 17411 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654498 17411 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654506 17411 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654515 17411 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654522 17411 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654530 17411 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654538 17411 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654549 17411 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654560 17411 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654571 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654579 17411 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654588 17411 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654597 17411 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654605 17411 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654613 17411 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654622 17411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654629 17411 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654637 17411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654646 17411 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 13:06:46.661571 master-0 kubenswrapper[17411]: W0223 13:06:46.654654 17411 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654661 17411 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654669 17411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654678 17411 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654688 17411 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654696 17411 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654704 17411 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654712 17411 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654719 17411 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654727 17411 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654735 17411 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654745 17411 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654757 17411 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654766 17411 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654775 17411 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654784 17411 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654794 17411 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654804 17411 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654814 17411 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 13:06:46.662703 master-0 kubenswrapper[17411]: W0223 13:06:46.654824 17411 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654834 17411 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654845 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654855 17411 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654863 17411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654871 17411 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654879 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654887 17411 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654895 17411 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654903 17411 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654910 17411 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654918 17411 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654927 17411 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654935 17411 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654943 17411 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654951 17411 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654960 17411 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654968 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654977 17411 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654984 17411 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 13:06:46.663968 master-0 kubenswrapper[17411]: W0223 13:06:46.654992 17411 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655000 17411 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655008 17411 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655016 17411 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655024 17411 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655031 17411 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: I0223 13:06:46.655044 17411 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655334 17411 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655350 17411 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655361 17411 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655371 17411 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655383 17411 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655393 17411 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655403 17411 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 13:06:46.665590 master-0 kubenswrapper[17411]: W0223 13:06:46.655415 17411 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655425 17411 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655433 17411 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655442 17411 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655451 17411 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655459 17411 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655467 17411 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655475 17411 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655528 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655537 17411 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:06:46.666375 master-0 
kubenswrapper[17411]: W0223 13:06:46.655545 17411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655553 17411 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655562 17411 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655570 17411 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655578 17411 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655587 17411 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655596 17411 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655604 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655612 17411 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655621 17411 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 13:06:46.666375 master-0 kubenswrapper[17411]: W0223 13:06:46.655629 17411 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655637 17411 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655645 17411 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 
13:06:46.655654 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655663 17411 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655672 17411 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655681 17411 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655690 17411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655698 17411 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655706 17411 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655715 17411 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655723 17411 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655731 17411 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655741 17411 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655749 17411 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655757 17411 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655765 17411 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655773 17411 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655781 17411 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655789 17411 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:06:46.667483 master-0 kubenswrapper[17411]: W0223 13:06:46.655800 17411 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655810 17411 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655819 17411 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655829 17411 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655839 17411 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655848 17411 feature_gate.go:330] unrecognized feature gate: Example Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655858 17411 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655866 17411 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655876 17411 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: 
W0223 13:06:46.655885 17411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655894 17411 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655903 17411 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655912 17411 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655919 17411 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655927 17411 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655935 17411 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655943 17411 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655951 17411 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655959 17411 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 23 13:06:46.668602 master-0 kubenswrapper[17411]: W0223 13:06:46.655969 17411 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: W0223 13:06:46.655979 17411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: W0223 13:06:46.655988 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: W0223 13:06:46.655996 17411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: W0223 13:06:46.656005 17411 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: W0223 13:06:46.656015 17411 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: I0223 13:06:46.656028 17411 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: I0223 13:06:46.656375 17411 server.go:940] "Client rotation is on, will bootstrap in background" Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: I0223 13:06:46.662287 17411 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: I0223 13:06:46.662851 17411 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: I0223 13:06:46.663477 17411 server.go:997] "Starting client certificate rotation" Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: I0223 13:06:46.663497 17411 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: I0223 13:06:46.663729 17411 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 12:50:52 +0000 UTC, rotation deadline is 2026-02-24 08:54:56.237236732 +0000 UTC Feb 23 13:06:46.669674 master-0 kubenswrapper[17411]: I0223 13:06:46.663846 17411 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h48m9.573398131s for next certificate rotation Feb 23 13:06:46.670690 master-0 kubenswrapper[17411]: I0223 13:06:46.664728 17411 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:06:46.670690 master-0 kubenswrapper[17411]: I0223 13:06:46.667155 17411 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:06:46.670939 master-0 kubenswrapper[17411]: I0223 13:06:46.670864 17411 log.go:25] "Validated CRI v1 runtime API" Feb 23 13:06:46.679574 master-0 kubenswrapper[17411]: I0223 13:06:46.678741 17411 log.go:25] "Validated CRI v1 image API" Feb 23 13:06:46.681419 master-0 kubenswrapper[17411]: I0223 13:06:46.681373 17411 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 23 13:06:46.698701 master-0 kubenswrapper[17411]: I0223 13:06:46.698583 17411 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 a0645d8c-797c-4e96-9069-34c436b1201e:/dev/vda3] Feb 23 13:06:46.700287 master-0 kubenswrapper[17411]: I0223 13:06:46.698668 17411 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0602a01933c19c27331c4869229405bde10812971f78fe4544f70f84182ff9cb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0602a01933c19c27331c4869229405bde10812971f78fe4544f70f84182ff9cb/userdata/shm major:0 minor:57 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0b622d2ce727cdb988e6f2262823c6404b1690f9ace5d0d0a58996f9054295b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0b622d2ce727cdb988e6f2262823c6404b1690f9ace5d0d0a58996f9054295b9/userdata/shm major:0 minor:423 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0f9f46b3a67457561213f46c0dde489fd5b7ad386b82e3ac02c2cf683cbbb34b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0f9f46b3a67457561213f46c0dde489fd5b7ad386b82e3ac02c2cf683cbbb34b/userdata/shm major:0 minor:527 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/11bfb3ba69318ac82e6a17119971c7970b30aa29f2137edc2b60951ffab2514d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/11bfb3ba69318ac82e6a17119971c7970b30aa29f2137edc2b60951ffab2514d/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/18938fa68af909af787dbe379ca80b17c407618308de01749e7e7cd98cd799e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/18938fa68af909af787dbe379ca80b17c407618308de01749e7e7cd98cd799e3/userdata/shm major:0 minor:529 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e0c3eebcdc0a49021edd14002068e329a47b402595863d157041ee099c56c4c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e0c3eebcdc0a49021edd14002068e329a47b402595863d157041ee099c56c4c/userdata/shm major:0 minor:515 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e39861f7eba3a69549695ea713f86bb313f7b6a9495d969cd59f6af1de1fb17/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e39861f7eba3a69549695ea713f86bb313f7b6a9495d969cd59f6af1de1fb17/userdata/shm major:0 minor:835 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e4a89c63867c66249f3be8d13ff9c7bfaab9b37c45169bdf97b3f2b62ddd38e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e4a89c63867c66249f3be8d13ff9c7bfaab9b37c45169bdf97b3f2b62ddd38e/userdata/shm major:0 minor:88 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2559444a55923be36b04d2b835f4fe9aa5657c0c673a3c0e61ca4df7a3e4fa7e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2559444a55923be36b04d2b835f4fe9aa5657c0c673a3c0e61ca4df7a3e4fa7e/userdata/shm major:0 minor:519 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2aa19e4d5644a53e8e4d1cac2c7eaac4c6b6bb82c8eb4f73291e6662560a35fe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2aa19e4d5644a53e8e4d1cac2c7eaac4c6b6bb82c8eb4f73291e6662560a35fe/userdata/shm major:0 minor:521 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3169cece10dce28604f06b8d9b8e0bfd22fff61c163e615108b41fa4a47fa62f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3169cece10dce28604f06b8d9b8e0bfd22fff61c163e615108b41fa4a47fa62f/userdata/shm major:0 minor:961 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31830e0362f7a4961ccb5574999c9b322d54b8a46c9d7f20c64fbd33df71f3a4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31830e0362f7a4961ccb5574999c9b322d54b8a46c9d7f20c64fbd33df71f3a4/userdata/shm major:0 minor:585 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33cac62afbdb0955b81a34c275e7dcd7f9a70a4c06dc059893f1ad4906b2e19a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33cac62afbdb0955b81a34c275e7dcd7f9a70a4c06dc059893f1ad4906b2e19a/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3c46e007ea8dbe14a7d36fc217c695f92a860be1997c49493f763a50d92a0aea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3c46e007ea8dbe14a7d36fc217c695f92a860be1997c49493f763a50d92a0aea/userdata/shm major:0 minor:499 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3dcb59345b5bc0117b6a00f1149c42a48da8235be304949c4a08edf500dfc736/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3dcb59345b5bc0117b6a00f1149c42a48da8235be304949c4a08edf500dfc736/userdata/shm major:0 minor:98 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f2f8ec2305a812ab189524192ed5bf86a7bba7a6b18ab8873a325d48aca12f0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f2f8ec2305a812ab189524192ed5bf86a7bba7a6b18ab8873a325d48aca12f0/userdata/shm major:0 minor:711 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4220039c33efb83321a003be7571a3649fc8e65f3d945873306ea0af077401f3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4220039c33efb83321a003be7571a3649fc8e65f3d945873306ea0af077401f3/userdata/shm major:0 minor:531 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4344b3d3f6b6142165c0129c787b17654ed07ce21ae9e2393257e14099cdbbe9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4344b3d3f6b6142165c0129c787b17654ed07ce21ae9e2393257e14099cdbbe9/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/45f23e7a0d31d2c3d126aa0253e052ced5690e8352ab68bf6cd5ecb2feb526ad/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/45f23e7a0d31d2c3d126aa0253e052ced5690e8352ab68bf6cd5ecb2feb526ad/userdata/shm major:0 minor:963 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/49a6b189f8fbf9c0aa7bb66aa47a22331a8f42d58ff77972bbb9f47a339fc2a5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49a6b189f8fbf9c0aa7bb66aa47a22331a8f42d58ff77972bbb9f47a339fc2a5/userdata/shm major:0 minor:965 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5011e8950499afd85717ca70ff2f77337ae409cf405b4306b6e9ccdd5b46be9c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5011e8950499afd85717ca70ff2f77337ae409cf405b4306b6e9ccdd5b46be9c/userdata/shm major:0 minor:534 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be/userdata/shm major:0 minor:297 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6098dfd89bcd8aca6a463063a3944c75855225a89ecc7de08ce7be93098f2f35/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6098dfd89bcd8aca6a463063a3944c75855225a89ecc7de08ce7be93098f2f35/userdata/shm major:0 minor:1057 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/623b2142d274970e84b3bbba2aa8e77e527e6d06e0243078dfae6d82495ba0a1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/623b2142d274970e84b3bbba2aa8e77e527e6d06e0243078dfae6d82495ba0a1/userdata/shm major:0 minor:852 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/691aedbd28a747f226bebdd350428eca31ef9a07fa5127fd9ae499bd323b6128/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/691aedbd28a747f226bebdd350428eca31ef9a07fa5127fd9ae499bd323b6128/userdata/shm major:0 minor:1100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a6904138e757c983258da9d68a265caa1653a1f12aa6dce24570b08bc55548c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a6904138e757c983258da9d68a265caa1653a1f12aa6dce24570b08bc55548c/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e/userdata/shm major:0 minor:152 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7eebc0d49b7c567b48cd5eefc8e53ef5d1ed0561b20f604d85eb5c27c39b44c1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7eebc0d49b7c567b48cd5eefc8e53ef5d1ed0561b20f604d85eb5c27c39b44c1/userdata/shm major:0 minor:543 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8b0568f1af714331492afb936eff9364e4e1b161e76a0c02477b4d75a1981323/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8b0568f1af714331492afb936eff9364e4e1b161e76a0c02477b4d75a1981323/userdata/shm major:0 minor:518 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/92134e9eac995bc624b7c976d7f3c271d22473d1a0968a654d73191099e3ca2d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/92134e9eac995bc624b7c976d7f3c271d22473d1a0968a654d73191099e3ca2d/userdata/shm major:0 minor:620 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/929cd0d2afd60c7d9f544041dba457a14033d12033f2175e4ed353ff5c86ad87/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/929cd0d2afd60c7d9f544041dba457a14033d12033f2175e4ed353ff5c86ad87/userdata/shm major:0 minor:131 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9933c3953079b9e9be4ada69849d6fdb342498ae2f03fc5ebff1e04b6c03839b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9933c3953079b9e9be4ada69849d6fdb342498ae2f03fc5ebff1e04b6c03839b/userdata/shm major:0 minor:751 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9f4b505810756bc1aacbada86c7f39ac25a9943e5236452d1fe977e3b589b653/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9f4b505810756bc1aacbada86c7f39ac25a9943e5236452d1fe977e3b589b653/userdata/shm major:0 minor:374 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a356ead5da6fa11053b4f6032b0e4b23eab458d556eaf1bb2ab3b5d9b3aca4d2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a356ead5da6fa11053b4f6032b0e4b23eab458d556eaf1bb2ab3b5d9b3aca4d2/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a8422896f1ec2ab46d73c67a22baefed99a0b0d0ea311d5d1f05da3156542ea9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8422896f1ec2ab46d73c67a22baefed99a0b0d0ea311d5d1f05da3156542ea9/userdata/shm major:0 minor:523 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aaa06fef5e54a39c410b76a0809563d32afa3bde2278654961bb3dcb6c8acd54/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aaa06fef5e54a39c410b76a0809563d32afa3bde2278654961bb3dcb6c8acd54/userdata/shm major:0 minor:657 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae5797327ba541f955d9212090aad83a203cfcaad025e64f727a371889902b1b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae5797327ba541f955d9212090aad83a203cfcaad025e64f727a371889902b1b/userdata/shm major:0 minor:514 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b279587ff3b533f90c8598bc9cab9d154d09bb9caaf9f198b885d5940932b084/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b279587ff3b533f90c8598bc9cab9d154d09bb9caaf9f198b885d5940932b084/userdata/shm major:0 minor:757 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b6114492191186efcd3545eb575590b7cd16391b8a4aad43b239f5268bdf89f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b6114492191186efcd3545eb575590b7cd16391b8a4aad43b239f5268bdf89f2/userdata/shm major:0 minor:798 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bed3da5536171867bf64480ad5077cc20f7948c0a8fbe4ad2cdb5e228228b281/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bed3da5536171867bf64480ad5077cc20f7948c0a8fbe4ad2cdb5e228228b281/userdata/shm major:0 minor:1046 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a/userdata/shm major:0 minor:301 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c0138fc447fbdee86ffbe815a7ddaa8ef72faf5cdfc02ebf5b12e2363a575ee0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0138fc447fbdee86ffbe815a7ddaa8ef72faf5cdfc02ebf5b12e2363a575ee0/userdata/shm major:0 minor:947 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c34c0686c926bdae121a0eedb681349d3da6cf0bf3d0236efb47c671f55f2bfa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c34c0686c926bdae121a0eedb681349d3da6cf0bf3d0236efb47c671f55f2bfa/userdata/shm major:0 minor:967 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5a186719c5336b48d37cc198d7b066ec48103dfdc1d217163ebf123ed0ab417/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5a186719c5336b48d37cc198d7b066ec48103dfdc1d217163ebf123ed0ab417/userdata/shm major:0 minor:343 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf51deb148d0a54f145674839e6a7092757223a01e6702931c3433cd1423df77/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf51deb148d0a54f145674839e6a7092757223a01e6702931c3433cd1423df77/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e402396c861028ad44b45bca58dd0a4df2309cc7110b7c0eb008ea09d7318bee/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e402396c861028ad44b45bca58dd0a4df2309cc7110b7c0eb008ea09d7318bee/userdata/shm major:0 minor:442 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e5215076a24da7b39e84679bbfcb310a83f91ce7772234df3fcbb41f2f595a40/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e5215076a24da7b39e84679bbfcb310a83f91ce7772234df3fcbb41f2f595a40/userdata/shm major:0 minor:904 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e863839c35f3d76c23dbc06dbedd4d1482a212122b16325b611cacabea8825bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e863839c35f3d76c23dbc06dbedd4d1482a212122b16325b611cacabea8825bb/userdata/shm major:0 minor:863 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e8a55e200b06071852324dd5becc03353e4f62598f3846b794dbf08621f93e39/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e8a55e200b06071852324dd5becc03353e4f62598f3846b794dbf08621f93e39/userdata/shm major:0 minor:468 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e8b057f2132ff258b6f72db6a015d3a5562051b7f885529a6871d5a5d46fff27/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e8b057f2132ff258b6f72db6a015d3a5562051b7f885529a6871d5a5d46fff27/userdata/shm major:0 minor:709 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997/userdata/shm major:0 minor:149 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f4152c7de869df80f0c905cfd7a6252eb8e9e684fe6b9642981a93d71e896532/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4152c7de869df80f0c905cfd7a6252eb8e9e684fe6b9642981a93d71e896532/userdata/shm major:0 minor:1031 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f67140661bca80f0082006c33ba58847d3a949b7d72bea750ff23edb65986950/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f67140661bca80f0082006c33ba58847d3a949b7d72bea750ff23edb65986950/userdata/shm major:0 minor:526 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072/userdata/shm major:0 minor:117 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f81b2dd369e93dc40f927baca8dae686df59bd8a564f1ae9d88f270b6628811d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f81b2dd369e93dc40f927baca8dae686df59bd8a564f1ae9d88f270b6628811d/userdata/shm major:0 minor:417 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ff4d0be1e1784bbea67828ca324e5f5b249ae15e9f46dff8848a9e4b264b1f9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff4d0be1e1784bbea67828ca324e5f5b249ae15e9f46dff8848a9e4b264b1f9a/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0128982b-01b4-49cb-ab4a-8759b844c86b/volumes/kubernetes.io~projected/kube-api-access-b2s4f:{mountpoint:/var/lib/kubelet/pods/0128982b-01b4-49cb-ab4a-8759b844c86b/volumes/kubernetes.io~projected/kube-api-access-b2s4f major:0 minor:817 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/031016de-897e-42bc-9de4-843122f64a75/volumes/kubernetes.io~projected/kube-api-access-sbml7:{mountpoint:/var/lib/kubelet/pods/031016de-897e-42bc-9de4-843122f64a75/volumes/kubernetes.io~projected/kube-api-access-sbml7 major:0 minor:704 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~projected/kube-api-access-kdnn5:{mountpoint:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~projected/kube-api-access-kdnn5 major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/etcd-client major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/serving-cert major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/048f4455-d99a-407b-8674-60efc7aa6ecb/volumes/kubernetes.io~projected/kube-api-access-plz5n:{mountpoint:/var/lib/kubelet/pods/048f4455-d99a-407b-8674-60efc7aa6ecb/volumes/kubernetes.io~projected/kube-api-access-plz5n major:0 minor:282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~projected/kube-api-access-xjthf:{mountpoint:/var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~projected/kube-api-access-xjthf major:0 minor:272 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~secret/metrics-tls major:0 minor:511 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d7283ee-8959-44b6-83fb-b152510485eb/volumes/kubernetes.io~projected/kube-api-access-hpgsw:{mountpoint:/var/lib/kubelet/pods/0d7283ee-8959-44b6-83fb-b152510485eb/volumes/kubernetes.io~projected/kube-api-access-hpgsw major:0 minor:1030 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d7283ee-8959-44b6-83fb-b152510485eb/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/0d7283ee-8959-44b6-83fb-b152510485eb/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:1029 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab/volumes/kubernetes.io~projected/kube-api-access-8hlwn:{mountpoint:/var/lib/kubelet/pods/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab/volumes/kubernetes.io~projected/kube-api-access-8hlwn major:0 minor:841 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:840 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~projected/kube-api-access-xhmk8:{mountpoint:/var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~projected/kube-api-access-xhmk8 major:0 minor:848 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~secret/cert major:0 minor:847 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:846 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18b48459-51ad-4b0d-8608-4ba6d3fa8e16/volumes/kubernetes.io~projected/kube-api-access-cjpkc:{mountpoint:/var/lib/kubelet/pods/18b48459-51ad-4b0d-8608-4ba6d3fa8e16/volumes/kubernetes.io~projected/kube-api-access-cjpkc major:0 minor:743 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18b48459-51ad-4b0d-8608-4ba6d3fa8e16/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/18b48459-51ad-4b0d-8608-4ba6d3fa8e16/volumes/kubernetes.io~secret/serving-cert major:0 minor:728 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~projected/kube-api-access-slw4h:{mountpoint:/var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~projected/kube-api-access-slw4h major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:506 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~projected/kube-api-access-q7h97:{mountpoint:/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~projected/kube-api-access-q7h97 major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~projected/kube-api-access-2csk2:{mountpoint:/var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~projected/kube-api-access-2csk2 major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29908b4a-0df5-4c46-b886-c968976c25fb/volumes/kubernetes.io~projected/kube-api-access-dbzwh:{mountpoint:/var/lib/kubelet/pods/29908b4a-0df5-4c46-b886-c968976c25fb/volumes/kubernetes.io~projected/kube-api-access-dbzwh major:0 minor:819 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:689 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~empty-dir/tmp major:0 minor:688 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~projected/kube-api-access-4r4jv:{mountpoint:/var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~projected/kube-api-access-4r4jv major:0 minor:684 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714/volumes/kubernetes.io~projected/kube-api-access-d8cx9:{mountpoint:/var/lib/kubelet/pods/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714/volumes/kubernetes.io~projected/kube-api-access-d8cx9 major:0 minor:700 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714/volumes/kubernetes.io~secret/metrics-tls major:0 minor:708 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~projected/kube-api-access-rrhrx:{mountpoint:/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~projected/kube-api-access-rrhrx major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~projected/kube-api-access-crt2t:{mountpoint:/var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~projected/kube-api-access-crt2t major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~secret/webhook-cert major:0 minor:166 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3d85c030-4931-42d7-afd6-72b41789aea8/volumes/kubernetes.io~projected/kube-api-access-zhl9t:{mountpoint:/var/lib/kubelet/pods/3d85c030-4931-42d7-afd6-72b41789aea8/volumes/kubernetes.io~projected/kube-api-access-zhl9t major:0 minor:862 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d85c030-4931-42d7-afd6-72b41789aea8/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/3d85c030-4931-42d7-afd6-72b41789aea8/volumes/kubernetes.io~secret/cert major:0 minor:861 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/430cb782-18d5-4429-99ef-29d3dca0d803/volumes/kubernetes.io~projected/kube-api-access-24gm8:{mountpoint:/var/lib/kubelet/pods/430cb782-18d5-4429-99ef-29d3dca0d803/volumes/kubernetes.io~projected/kube-api-access-24gm8 major:0 minor:820 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/430cb782-18d5-4429-99ef-29d3dca0d803/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/430cb782-18d5-4429-99ef-29d3dca0d803/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:813 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~projected/kube-api-access-jccjf:{mountpoint:/var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~projected/kube-api-access-jccjf major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~secret/webhook-certs major:0 minor:513 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~projected/kube-api-access major:0 minor:266 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bc22782-a369-48aa-a0e8-c1c63ffa3053/volumes/kubernetes.io~projected/kube-api-access-265wg:{mountpoint:/var/lib/kubelet/pods/4bc22782-a369-48aa-a0e8-c1c63ffa3053/volumes/kubernetes.io~projected/kube-api-access-265wg major:0 minor:797 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bc22782-a369-48aa-a0e8-c1c63ffa3053/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/4bc22782-a369-48aa-a0e8-c1c63ffa3053/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:773 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e6bc033-cd90-4704-b03a-8e9c6c0d3904/volumes/kubernetes.io~projected/kube-api-access-2tgmq:{mountpoint:/var/lib/kubelet/pods/4e6bc033-cd90-4704-b03a-8e9c6c0d3904/volumes/kubernetes.io~projected/kube-api-access-2tgmq major:0 minor:415 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~projected/kube-api-access-8l6fp:{mountpoint:/var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~projected/kube-api-access-8l6fp major:0 minor:843 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:815 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~secret/webhook-cert major:0 minor:816 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/65ddfc68-2612-42b6-ad11-6fe44f1cff60/volumes/kubernetes.io~projected/kube-api-access-8jg7c:{mountpoint:/var/lib/kubelet/pods/65ddfc68-2612-42b6-ad11-6fe44f1cff60/volumes/kubernetes.io~projected/kube-api-access-8jg7c major:0 minor:130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70ccda5f-ca1a-4fce-b77f-a1132f85635a/volumes/kubernetes.io~projected/kube-api-access-mwdtv:{mountpoint:/var/lib/kubelet/pods/70ccda5f-ca1a-4fce-b77f-a1132f85635a/volumes/kubernetes.io~projected/kube-api-access-mwdtv major:0 minor:735 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70ccda5f-ca1a-4fce-b77f-a1132f85635a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/70ccda5f-ca1a-4fce-b77f-a1132f85635a/volumes/kubernetes.io~secret/serving-cert major:0 minor:734 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71a07622-3038-4b8c-b6bb-5f28a4115012/volumes/kubernetes.io~projected/kube-api-access-6r8s7:{mountpoint:/var/lib/kubelet/pods/71a07622-3038-4b8c-b6bb-5f28a4115012/volumes/kubernetes.io~projected/kube-api-access-6r8s7 major:0 minor:429 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71a07622-3038-4b8c-b6bb-5f28a4115012/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/71a07622-3038-4b8c-b6bb-5f28a4115012/volumes/kubernetes.io~secret/signing-key major:0 minor:426 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~projected/kube-api-access-fppk7:{mountpoint:/var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~projected/kube-api-access-fppk7 major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~secret/metrics-tls major:0 minor:77 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/kube-api-access-tz9fr:{mountpoint:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/kube-api-access-tz9fr major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:507 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8db940c1-82ba-4b6e-8137-059e26ab1ced/volumes/kubernetes.io~projected/kube-api-access-ts56d:{mountpoint:/var/lib/kubelet/pods/8db940c1-82ba-4b6e-8137-059e26ab1ced/volumes/kubernetes.io~projected/kube-api-access-ts56d major:0 minor:821 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8db940c1-82ba-4b6e-8137-059e26ab1ced/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/8db940c1-82ba-4b6e-8137-059e26ab1ced/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:814 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~projected/kube-api-access-p4h6l:{mountpoint:/var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~projected/kube-api-access-p4h6l major:0 minor:281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~secret/serving-cert major:0 minor:245 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9c3f9dc5-d10d-452c-bf5d-c5830a444617/volumes/kubernetes.io~projected/kube-api-access-65tqd:{mountpoint:/var/lib/kubelet/pods/9c3f9dc5-d10d-452c-bf5d-c5830a444617/volumes/kubernetes.io~projected/kube-api-access-65tqd major:0 minor:1045 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~projected/kube-api-access-nmv5f:{mountpoint:/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~projected/kube-api-access-nmv5f major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:502 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:510 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~projected/kube-api-access-lmw9r:{mountpoint:/var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~projected/kube-api-access-lmw9r major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~secret/serving-cert major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae5c9120-c38d-46c0-af43-9275563b1a67/volumes/kubernetes.io~projected/kube-api-access-8f6sq:{mountpoint:/var/lib/kubelet/pods/ae5c9120-c38d-46c0-af43-9275563b1a67/volumes/kubernetes.io~projected/kube-api-access-8f6sq major:0 minor:421 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~projected/kube-api-access major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~secret/serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b48d5b87-189b-45b6-ba55-37bd22d59eb6/volumes/kubernetes.io~projected/kube-api-access-nj957:{mountpoint:/var/lib/kubelet/pods/b48d5b87-189b-45b6-ba55-37bd22d59eb6/volumes/kubernetes.io~projected/kube-api-access-nj957 major:0 minor:1056 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~projected/kube-api-access-fsp9d:{mountpoint:/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~projected/kube-api-access-fsp9d major:0 minor:147 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa/volumes/kubernetes.io~projected/kube-api-access-8c4jr:{mountpoint:/var/lib/kubelet/pods/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa/volumes/kubernetes.io~projected/kube-api-access-8c4jr major:0 minor:729 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa/volumes/kubernetes.io~secret/serving-cert major:0 minor:498 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b7585f9f-12e5-451b-beeb-db43ae778f25/volumes/kubernetes.io~projected/kube-api-access-qfrht:{mountpoint:/var/lib/kubelet/pods/b7585f9f-12e5-451b-beeb-db43ae778f25/volumes/kubernetes.io~projected/kube-api-access-qfrht major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~projected/ca-certs major:0 minor:474 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~projected/kube-api-access-rqsvs:{mountpoint:/var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~projected/kube-api-access-rqsvs major:0 minor:475 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:538 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~projected/kube-api-access-xlpqn:{mountpoint:/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~projected/kube-api-access-xlpqn major:0 minor:581 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/encryption-config major:0 minor:578 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/etcd-client major:0 minor:579 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/serving-cert major:0 minor:580 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0b59f2a-7014-448c-9d3b-e38281f07dbc/volumes/kubernetes.io~projected/kube-api-access-nt9nl:{mountpoint:/var/lib/kubelet/pods/c0b59f2a-7014-448c-9d3b-e38281f07dbc/volumes/kubernetes.io~projected/kube-api-access-nt9nl major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0d6008c-6e09-4e61-83a5-60456ca90e1e/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/c0d6008c-6e09-4e61-83a5-60456ca90e1e/volumes/kubernetes.io~projected/ca-certs major:0 minor:466 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0d6008c-6e09-4e61-83a5-60456ca90e1e/volumes/kubernetes.io~projected/kube-api-access-9l49w:{mountpoint:/var/lib/kubelet/pods/c0d6008c-6e09-4e61-83a5-60456ca90e1e/volumes/kubernetes.io~projected/kube-api-access-9l49w major:0 minor:467 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~projected/kube-api-access-rbl2g:{mountpoint:/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~projected/kube-api-access-rbl2g major:0 minor:493 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/encryption-config major:0 minor:492 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/etcd-client major:0 minor:491 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/serving-cert major:0 minor:490 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~projected/kube-api-access-l4j2q:{mountpoint:/var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~projected/kube-api-access-l4j2q major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c33f208a-e158-47e2-83d5-ac792bf3a1d5/volumes/kubernetes.io~projected/kube-api-access-kpbtg:{mountpoint:/var/lib/kubelet/pods/c33f208a-e158-47e2-83d5-ac792bf3a1d5/volumes/kubernetes.io~projected/kube-api-access-kpbtg major:0 minor:191 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c33f208a-e158-47e2-83d5-ac792bf3a1d5/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/c33f208a-e158-47e2-83d5-ac792bf3a1d5/volumes/kubernetes.io~secret/proxy-tls major:0 minor:679 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~projected/kube-api-access-nhgkv:{mountpoint:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~projected/kube-api-access-nhgkv major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/srv-cert major:0 minor:508 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0c7587b-eea6-4d98-b39d-3a0feba4035d/volumes/kubernetes.io~projected/kube-api-access-q2cgc:{mountpoint:/var/lib/kubelet/pods/d0c7587b-eea6-4d98-b39d-3a0feba4035d/volumes/kubernetes.io~projected/kube-api-access-q2cgc major:0 minor:340 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d32952be-0fe3-431f-aa8f-6a35159fa845/volumes/kubernetes.io~projected/kube-api-access-5zs2l:{mountpoint:/var/lib/kubelet/pods/d32952be-0fe3-431f-aa8f-6a35159fa845/volumes/kubernetes.io~projected/kube-api-access-5zs2l major:0 minor:373 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d32952be-0fe3-431f-aa8f-6a35159fa845/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/d32952be-0fe3-431f-aa8f-6a35159fa845/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:372 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~projected/kube-api-access-2857n:{mountpoint:/var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~projected/kube-api-access-2857n major:0 minor:736 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:723 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~secret/srv-cert major:0 minor:724 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~projected/kube-api-access-tvr7p:{mountpoint:/var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~projected/kube-api-access-tvr7p major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:503 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/kube-api-access-r8l8f:{mountpoint:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/kube-api-access-r8l8f major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~secret/metrics-tls major:0 minor:505 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~projected/kube-api-access-lwphb:{mountpoint:/var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~projected/kube-api-access-lwphb major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~secret/metrics-certs major:0 minor:512 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~projected/kube-api-access-ngvd2:{mountpoint:/var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~projected/kube-api-access-ngvd2 major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:504 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~projected/kube-api-access-gr6rg:{mountpoint:/var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~projected/kube-api-access-gr6rg major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~secret/serving-cert major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f88d6ed3-c0a6-4eef-b80c-417994cf69b0/volumes/kubernetes.io~projected/kube-api-access-xdqd6:{mountpoint:/var/lib/kubelet/pods/f88d6ed3-c0a6-4eef-b80c-417994cf69b0/volumes/kubernetes.io~projected/kube-api-access-xdqd6 major:0 minor:869 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f88d6ed3-c0a6-4eef-b80c-417994cf69b0/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/f88d6ed3-c0a6-4eef-b80c-417994cf69b0/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:868 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:509 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd/volumes/kubernetes.io~secret/serving-cert major:0 minor:112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~projected/kube-api-access-7v7b9:{mountpoint:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~projected/kube-api-access-7v7b9 major:0 minor:148 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:143 fsType:tmpfs blockSize:0} overlay_0-1002:{mountpoint:/var/lib/containers/storage/overlay/142e81d4dba992dc7d8a5f29fbcaf3504dd2f45bde7c15eec72e4ed520521905/merged major:0 minor:1002 fsType:overlay blockSize:0} overlay_0-1005:{mountpoint:/var/lib/containers/storage/overlay/2ca814320e0d184d6f0d2e5135db3f4add39bcbf2b3308ac796ac4912e3dc8d0/merged major:0 minor:1005 fsType:overlay blockSize:0} overlay_0-1007:{mountpoint:/var/lib/containers/storage/overlay/b7067062e9466bfe422c83c2cad940024fb509e4f5e8a8bbc21b85022aa06ae4/merged major:0 minor:1007 fsType:overlay blockSize:0} overlay_0-1011:{mountpoint:/var/lib/containers/storage/overlay/1cc7d5672cb02439f7c1b64a2ba0cf1e48717e676371e9b40da8e86af6afb33d/merged major:0 minor:1011 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/eaec3fa7b549d085042c44ed7575928bdb25d1c07a7d73cceb9d49b07bfb0ed2/merged major:0 minor:102 fsType:overlay blockSize:0} 
overlay_0-1021:{mountpoint:/var/lib/containers/storage/overlay/eba04fe5b42e275bc68c16edfda97c95382d9fa778474dde9f9c34d13e06575d/merged major:0 minor:1021 fsType:overlay blockSize:0} overlay_0-1023:{mountpoint:/var/lib/containers/storage/overlay/8b64c6762c78dd0362c6da9064a26f84eda1de319d0cfc32c1fe21811ac4d24f/merged major:0 minor:1023 fsType:overlay blockSize:0} overlay_0-1025:{mountpoint:/var/lib/containers/storage/overlay/0644a0cd13a2cc2ac3e0f71be8b947fc4e5f08980325c8599a61e3ea8265074c/merged major:0 minor:1025 fsType:overlay blockSize:0} overlay_0-1027:{mountpoint:/var/lib/containers/storage/overlay/8da176e70118deb70cacc3eb4c9534fecb83cf8811a74c9e90fbc06c5f4f72c8/merged major:0 minor:1027 fsType:overlay blockSize:0} overlay_0-1033:{mountpoint:/var/lib/containers/storage/overlay/ad01f372b1df4c0894671e22f9a440c089f3fefd34037a9c8041210ac3dc6a5f/merged major:0 minor:1033 fsType:overlay blockSize:0} overlay_0-1036:{mountpoint:/var/lib/containers/storage/overlay/88849b0196fb87f1d7090bfcdb49ca7b44e410cdae73e642a46adf4462d54c17/merged major:0 minor:1036 fsType:overlay blockSize:0} overlay_0-1039:{mountpoint:/var/lib/containers/storage/overlay/66e1fa14e5457693f93bd446843497d75d8a2ff2caca313888f5a2a563924dfa/merged major:0 minor:1039 fsType:overlay blockSize:0} overlay_0-104:{mountpoint:/var/lib/containers/storage/overlay/302d18badbe483a85ca9a39afc42d059b218f4243311eb88be3cedfca6108381/merged major:0 minor:104 fsType:overlay blockSize:0} overlay_0-1043:{mountpoint:/var/lib/containers/storage/overlay/d6f46d5d1b0cb77db4e76aa77b358cdc5b0cd0aa61aa40432e6653f26fee042f/merged major:0 minor:1043 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/f7827b3b11e56bb13f049b4a67a8cda6819de50e999a1cc003b1936e8ddba8cb/merged major:0 minor:1050 fsType:overlay blockSize:0} overlay_0-1052:{mountpoint:/var/lib/containers/storage/overlay/5a2585c7e39f401092a5f370e8b75f1db79c0efff6c4027308d1b0128c47219d/merged major:0 minor:1052 fsType:overlay blockSize:0} 
overlay_0-1054:{mountpoint:/var/lib/containers/storage/overlay/2d71da43ab947f27ab3e956723930028d03a0dbb7c1fcaf20ad2250aedf9cbea/merged major:0 minor:1054 fsType:overlay blockSize:0} overlay_0-1059:{mountpoint:/var/lib/containers/storage/overlay/69c14ca2d3876a9f5dede1fa372e31500cd800ccb9b3da0628021487a041d84e/merged major:0 minor:1059 fsType:overlay blockSize:0} overlay_0-1061:{mountpoint:/var/lib/containers/storage/overlay/2b37cd7670c9752e0b9df48976d3dfeae2f2f205232019a4662f9cc9a17e3eae/merged major:0 minor:1061 fsType:overlay blockSize:0} overlay_0-1065:{mountpoint:/var/lib/containers/storage/overlay/695a19ce3c092c3e59dad61f8cd1519abe07e00eebb9ee253021ea2847f1062f/merged major:0 minor:1065 fsType:overlay blockSize:0} overlay_0-1071:{mountpoint:/var/lib/containers/storage/overlay/1edafbf405c99644039554d7a1ac6c16e6689ed5bbc8084a08bf8f1e56a12bd4/merged major:0 minor:1071 fsType:overlay blockSize:0} overlay_0-1083:{mountpoint:/var/lib/containers/storage/overlay/d5466e5cb3acf90f27ff216b2683caf0467ab347493fe85ad143d747ce617e4b/merged major:0 minor:1083 fsType:overlay blockSize:0} overlay_0-115:{mountpoint:/var/lib/containers/storage/overlay/0f111b6188848e42258d030d5c821b753c1543f987fa429c04aa49fc9e45a6a1/merged major:0 minor:115 fsType:overlay blockSize:0} overlay_0-119:{mountpoint:/var/lib/containers/storage/overlay/eda31bd64232f2fc513027432fe3f1a02c61461496cd24560719892bd08b0ea8/merged major:0 minor:119 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/3504739513dcef5ea5f997f7f54490b36abdb180c144d1b2eb9f1a5ae49127bd/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/39b5a986db28fbc83ade6171bebca27200498397b78e7b995db5d9fb68ca124e/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/0891d64411acd125ddfe8b29ee3bef2b8c89be6c1f269587072c5210dfe4b588/merged major:0 minor:125 fsType:overlay blockSize:0} 
overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/452faedbc71aa9aa1ae120544b931e9aa8fc0a8296a7547cea2f1bcb33372479/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-128:{mountpoint:/var/lib/containers/storage/overlay/980f3ff70005fb6070fdbddcade5273b565a9758e0dd5b0c55943f27c73252f7/merged major:0 minor:128 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/2b332784aade244fe5c1f6d676924b1fbb243e1942b7407169a866b9856106d0/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/5cebfd03d4baf76b0d45f3b0cec14b6512d7e092b591ad99d8f80688d9cee1b2/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/7c93767e921d5ddd69cffe13aa5a765b69129d0d97b6c43d4c75feccb6623271/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/23e3f8a797cf2694217d59bf7f1c99e0c180911040e89633e3af32abf4b315c9/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-151:{mountpoint:/var/lib/containers/storage/overlay/98d9d67f97c4a23061b9e78102cad14235b6e99706d583d5c32fa72b5afb497a/merged major:0 minor:151 fsType:overlay blockSize:0} overlay_0-155:{mountpoint:/var/lib/containers/storage/overlay/df0fba76b984c37e21199b12db39a7a49a598237a62b2a3564663df70a129289/merged major:0 minor:155 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/36624997f9e68be643f40a9407e2f49305b3e1d23b019ff1721e9ff87c4f4ebf/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/af016d2c244495ff1a47195db819f82c459adc9449f3eef8624495189743b219/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/3e865b4082a97cff1107f297a6bf94d930326aae892b8f7e5a4fd8bee5f59e24/merged major:0 minor:161 fsType:overlay blockSize:0} 
overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/40024e92907f505ee77eefe0a75f53b88f3cc191e0b1f793db2301afd4ebc63f/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/53fe9b890199404f1365a28adaf7f737d7d253ee31601701e80ee9e49dab06b6/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/08bdf389964dbf16b8de2fd3de1cef20ff004f247c505f60dc456349aaebb057/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/d335a4a6a86ff74d32deced58a2e37af5ee84d0e6965d6b97af01cbb03818085/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/e558aab6f31335fb08087dbc0b803eebce78407bd85db135b6df5ddb0b6b724d/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-182:{mountpoint:/var/lib/containers/storage/overlay/38b17c5834591f05605d6e4479efe5dc5f1a61c7f96325202978dfb9b3a87ef1/merged major:0 minor:182 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/dbe629524759c44e92ea3a0eb393fdea943195b42a9b527f77795ad6eac093da/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/d020e6db7545c9e09d8537e75210aca175fea8f8dd2d9870dab1f9eb37eeec24/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/2981482c785f654cb46b89464066bb10bf200adaea801f0530738bf3bcbc6aa2/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/5ae9c13ece4b281cd44c72e89f7c0a476a3ff29cbca16bdac1f50b579588c4ef/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-197:{mountpoint:/var/lib/containers/storage/overlay/daa126a250dab4486e2aca4aa536e1d97d8675671a4c916938943ad319a585dd/merged major:0 minor:197 fsType:overlay blockSize:0} 
overlay_0-202:{mountpoint:/var/lib/containers/storage/overlay/4562d0526c5194c2286b0dd7f39a9ed2b9ac16f0be5c61f4f73742416b18a0b5/merged major:0 minor:202 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/c02391bf4c3de60fc984b1c3ba0fa01ea026da623efcea80344fd9b05e935a82/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/60296e84b93496a350277233723b3a33fdb4668bb64e0ab00836e3e289b0f3f9/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/5bb5a39562496f577579acbfe906d8dbe922d603889b1adb607f6ae750df1b54/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-225:{mountpoint:/var/lib/containers/storage/overlay/2af52d69e3324a8e9de0b6ade0b90d6865371a365a94388ea9debaed024dddd2/merged major:0 minor:225 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/2241ec4f756f66804fb2802d5a0aa5149a76b7c6aff618417e9de3739c46052f/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/928e04d2aa24ca9b02266368a93c11cfc9b882abbc7b1d2cda2e87de7b9f47ed/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/d25943883b226108ade83c9adffe24518113b5e008e1c6bba4320299016ebeab/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/7eefaa3d195cc623f2f0da340718a94e0b7a0244bf2d48bb9bd5b09970bae89c/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/fb6457e5332b8048ae16a26f33bdd956ff6feee3358308fa69f310a0ae488557/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/d732509ee1ec684a1f911fef39b66850b713f0059c4bd72f73b2798140cf9d3e/merged major:0 minor:309 fsType:overlay blockSize:0} 
overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/eda685bc9e178f5a33850260ceeff5476ef0230a80d3ba113c8f90ae338ed01f/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/4e533a3703ca905378dffac2adf814dc16e36b7433dbf7032caca9e72166894e/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/a2d839845640c3b8162b00503bc3c0047e098035b45a53b6051a2f4dbd03a3c7/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/85da1dce87e2079aa686b83039237224c0e2a541b53207221be6938e66e9b2f3/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/71687e22427353e73fdefc3a7dbd50dced583f0d5a0443e452608f974f329272/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/a75671efa102ab095222608c92bd1dd2ff24d782e094b398c1f05a983af27ba9/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/6f50e1269c12b5236b99d71cf9130c6c63b509098778de52dda895284dd6954b/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/af8ca27a6bafc52fd603e7dcb8c98564bfb634c9bee4490547790005f39564b2/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/cbbeb6d12d2c961c9ff6e417f55653ed299531d248b086d1ddcf6d6572257cb5/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/2ae37388824912215afd33bdcd42a68dccea3b0aae6bf5f79a53c5435cca7d10/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/77a8723ccdda8a83d5e9ba9601e2c88345cb1527930f741647c77f1fab1c5fee/merged major:0 minor:330 fsType:overlay blockSize:0} 
overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/5fc9b8e538b24ed69a072908f78a8891aff4b3fd05bb1a758603ac6e3784cb4f/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/2ddeec3e78ac7ab3c62f1b30fc483c53151dc095ac118498a9a62692c78eb28f/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-337:{mountpoint:/var/lib/containers/storage/overlay/c339b902343a52bfc8e7446a1319e295cd1d29075cbb91d55f3fc9232978d023/merged major:0 minor:337 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/fc958466ca57a9059e54da1e618fb600b26c2e129e1a9e5f8e3b27d9411cf027/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/8ed8477cf0c5f1f243965588a1350acc0eedab0e55ed73631fa7a232f3781903/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-350:{mountpoint:/var/lib/containers/storage/overlay/99df03fe88e2d85d9e9b6beb70439c1b0315f34333d2c6cb8e6e578cf4528416/merged major:0 minor:350 fsType:overlay blockSize:0} overlay_0-352:{mountpoint:/var/lib/containers/storage/overlay/acf595c654590f3541e8126585bca7e4dd63565f6602474192036a8b4766425e/merged major:0 minor:352 fsType:overlay blockSize:0} overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/4d3c410832bb40aa0a77b3e6376de318ce6dc4b91ec5680447d8b8a79274cff2/merged major:0 minor:354 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/1342b2f83f1183a62914f8561842247bfaafe60c21756c0c78173f41e994c8e6/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/2861fd8a91a6fb936c60ff4f5e69eb5e2e11790ef71da6435b7ba831be5fb0db/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/deadf1285559bc819d508fb861d197641673d267ea2a987faee23c9b3321bffe/merged major:0 minor:360 fsType:overlay blockSize:0} 
overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/9d6bd87dc95bad9b907a58558bf76af5e5c62fcfbf7bd5bffc0f6caf3ad87d31/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/bb64f3ef538cd3338b1f4a926238f8c478ef5ba59128cb051789172d3fd99b20/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-366:{mountpoint:/var/lib/containers/storage/overlay/c838b71999289cc0e53abe693bd13a538dc2cfcd47d85f666c444360e23dfd9a/merged major:0 minor:366 fsType:overlay blockSize:0} overlay_0-368:{mountpoint:/var/lib/containers/storage/overlay/f836c7ad84975c351e247058cd9d1fe945e9445397ea7f4d0b1346e46dac5391/merged major:0 minor:368 fsType:overlay blockSize:0} overlay_0-370:{mountpoint:/var/lib/containers/storage/overlay/025bc00372937b697c3674f629cea760c5ef8d073c17492f22ff0c88830065a5/merged major:0 minor:370 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/963a3e840b54f0e22ce1a0153d980906482621e0d1725c0b1db7434fd39b1f8b/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/4f8832ce686713b221075b1332222e11604c2d4f284c765906f7119901b3dfa2/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-381:{mountpoint:/var/lib/containers/storage/overlay/ead655cf6a895edf388a437b0ef3f9497bf6b8d4ac0f21f7279e1d06156df283/merged major:0 minor:381 fsType:overlay blockSize:0} overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/c1814fb7945adf2f594aef70de4c97dfd0ae259aa673fc52b5ce7b19748faa39/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-386:{mountpoint:/var/lib/containers/storage/overlay/38ad5f412d73019430188b6084c6cdb687ca1f6af2a9b18911cadd1a198ebe12/merged major:0 minor:386 fsType:overlay blockSize:0} overlay_0-388:{mountpoint:/var/lib/containers/storage/overlay/56167c0ba5c10e2be35819f7e0077e975c7f868cfe56d8ea9f12b575da14d0fd/merged major:0 minor:388 fsType:overlay blockSize:0} 
overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/f6d1678822fdf14b5d6c4d205fec839286f130568638a0ee9fed4acf0c6deab4/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-398:{mountpoint:/var/lib/containers/storage/overlay/92a63ceacfed4fdbeb1fa751e2bbcf1cefee53337ade7adabf43a59dd13edaef/merged major:0 minor:398 fsType:overlay blockSize:0} overlay_0-399:{mountpoint:/var/lib/containers/storage/overlay/277126b3e6f03d18979c6f6d9a8eae1a7f7ad0721ecf1e66720af1f2f1827166/merged major:0 minor:399 fsType:overlay blockSize:0} overlay_0-401:{mountpoint:/var/lib/containers/storage/overlay/6f849ea77d93c74337e72655278ce4cd89a4d35ff34667218393f8b99e068689/merged major:0 minor:401 fsType:overlay blockSize:0} overlay_0-404:{mountpoint:/var/lib/containers/storage/overlay/8e8587aa4e16c51791b5d8e0e90f0c74a8e51b3b607d3e57af32b3f5ee267dc6/merged major:0 minor:404 fsType:overlay blockSize:0} overlay_0-408:{mountpoint:/var/lib/containers/storage/overlay/405a1ae525bfc9bf67ef3341ea8b98630f7629fd261c1fc7e237dc289248ae50/merged major:0 minor:408 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/a5a6ee382e2d7c3066fc42c4cad504cbbe05cb0ff5ea58306a82812575ee4da9/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/75b628f667e39a13df08680a71d6997c0d1ba5bedd8cb9c69c5513a5b79edbe5/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-419:{mountpoint:/var/lib/containers/storage/overlay/9b8d79fa6992e1112ff0b7d38cf12975bb8f855a1ff829013dc3266aacf03c8f/merged major:0 minor:419 fsType:overlay blockSize:0} overlay_0-422:{mountpoint:/var/lib/containers/storage/overlay/1bbd7650ad0e951b4df5266c9261cc27c3a19bf7c534ef501add2371fabcf18f/merged major:0 minor:422 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/1388ae1a33268aa6fb7393bb23bd23edddeee024511b61a81398efbf73c96e47/merged major:0 minor:43 fsType:overlay blockSize:0} 
overlay_0-430:{mountpoint:/var/lib/containers/storage/overlay/a55297cdf13582458606dc5da518b2058536abe00189da402bc38aef6b952ffb/merged major:0 minor:430 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/06b2c3bc27f5f7ab92b1ae92b8d4129d40053b3596a8c03e471046cbd12d3971/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-435:{mountpoint:/var/lib/containers/storage/overlay/1efc850fbc4f7c2d61baeb0315066f1cdaf842da03ed9c682e265a04ee08771e/merged major:0 minor:435 fsType:overlay blockSize:0} overlay_0-437:{mountpoint:/var/lib/containers/storage/overlay/9e88aa9efa2244dba9ac6441d1a5fb2d0dee7e3ae1a52e0ceea324273598df10/merged major:0 minor:437 fsType:overlay blockSize:0} overlay_0-439:{mountpoint:/var/lib/containers/storage/overlay/b82e49b5eba50b40d32d096380d8f309dafa185dde6c9d72f053cac8411b35c3/merged major:0 minor:439 fsType:overlay blockSize:0} overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/69746daf79dab7a8faca7c015540723926eba1b56f11b342a6dacb4e6cbf0e63/merged major:0 minor:444 fsType:overlay blockSize:0} overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/e15e494861502bf8e1c462c5bc7b5bcee5cbc9d9374fe0d3dea6cf4009bd0c2e/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-452:{mountpoint:/var/lib/containers/storage/overlay/ac16f81dddcfdf609052abdeb6ee845d32210f56efc26237ba50a756ee95d7d6/merged major:0 minor:452 fsType:overlay blockSize:0} overlay_0-462:{mountpoint:/var/lib/containers/storage/overlay/88b8d8f5eafa62c6ca20787793df4fe2a476d9980586d04202148fffb3ee7121/merged major:0 minor:462 fsType:overlay blockSize:0} overlay_0-463:{mountpoint:/var/lib/containers/storage/overlay/b57f9452d10e2e52b71c3baa8492f0ca148a3ceb475d1e6266e114604907304c/merged major:0 minor:463 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/a4cd671aff7dc649043dd4ef47eb0dd7fde36b308c5d732c70e67de2b34baae8/merged major:0 minor:470 fsType:overlay blockSize:0} 
overlay_0-472:{mountpoint:/var/lib/containers/storage/overlay/e8f564b23c98d1aaa09c73478c8d4c2e7eaf415b6cff1f97a12b6fb4f3b1d5bd/merged major:0 minor:472 fsType:overlay blockSize:0} overlay_0-476:{mountpoint:/var/lib/containers/storage/overlay/0cc1bec32408f2f564cb0c7c1898fb90395d0de48d2c10d26590e35774c7024e/merged major:0 minor:476 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/294b0ace8561fb65386632c12f9042cb7ed91b1f12caa3c7567de096778d2889/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-494:{mountpoint:/var/lib/containers/storage/overlay/1c1d334ad1d35b874380d914a8e9d1bd77874a7a7f75ea9d6f518c7d4a9ec9a2/merged major:0 minor:494 fsType:overlay blockSize:0} overlay_0-495:{mountpoint:/var/lib/containers/storage/overlay/825473ed8dd09e2f140ce8d2f67ad9d8ff45de31668def35f35d6906cfc6f2d6/merged major:0 minor:495 fsType:overlay blockSize:0} overlay_0-496:{mountpoint:/var/lib/containers/storage/overlay/6d5a354013b0866198e69851f4014fd3274e4f101a3f7785a83eb74435eff20d/merged major:0 minor:496 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/f308041162736faa361aae5162cbcf177317e5135fd2a15cceeb925d5af940ee/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-524:{mountpoint:/var/lib/containers/storage/overlay/6d31ae31d8e449db9c85e08ec97148c909fd992f2ad90daa44da160c5aa4e6cf/merged major:0 minor:524 fsType:overlay blockSize:0} overlay_0-539:{mountpoint:/var/lib/containers/storage/overlay/fd9019f75983bd665399581b3ca1262231d04e717129c1058eabc3ddf231567b/merged major:0 minor:539 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/9f796da68e33dc7accfdf42a62590cf0d475ad34837fa9c006a8197c9d8ef460/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-541:{mountpoint:/var/lib/containers/storage/overlay/06407a8dfbbde8225ce71e44eb9a3b1ed1afd1d9d9072d3ea9f79391288fc1b9/merged major:0 minor:541 fsType:overlay blockSize:0} 
overlay_0-545:{mountpoint:/var/lib/containers/storage/overlay/791b33bce67be87560b8c88bd7563e05b9a23285ad099bbdee5e154f2c468c86/merged major:0 minor:545 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/bf5621071bc4c3df2e5b3b325eeba352e76f089bf73ec4302ba71a79a43e6889/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-549:{mountpoint:/var/lib/containers/storage/overlay/223731d134a35c04933db758bcd76b213193c7d5c22417e471a79545dce41158/merged major:0 minor:549 fsType:overlay blockSize:0} overlay_0-551:{mountpoint:/var/lib/containers/storage/overlay/1123b8ab7df2a86ea461837a09689085ff26108e42f95bfc57205cca95a3df82/merged major:0 minor:551 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/6e4c6b42832b7b2f530ea711b939da75519b70693266b23c7a4bc93f9dcd4dd5/merged major:0 minor:553 fsType:overlay blockSize:0} overlay_0-555:{mountpoint:/var/lib/containers/storage/overlay/a0da22e7b90920c134090b8ed81e85e97f59a0523b494ef6c0421f25e8d864c9/merged major:0 minor:555 fsType:overlay blockSize:0} overlay_0-557:{mountpoint:/var/lib/containers/storage/overlay/41d2ea25e53a9793f737430318a7120418c5aaf4c16a0622cf47ce00911e2a5e/merged major:0 minor:557 fsType:overlay blockSize:0} overlay_0-559:{mountpoint:/var/lib/containers/storage/overlay/480dac0016aed7f959e7aafff119d2a04100ed9df947d72c81992ceda5b34ee0/merged major:0 minor:559 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/6d618c7ebc3597fb21ad1c4771d7a56d1c60ff9bcaf78c51647e8b244110d3b9/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-561:{mountpoint:/var/lib/containers/storage/overlay/133913153a7039e259d2171dcfffc887f006d271f5ebe3d291b4752983749255/merged major:0 minor:561 fsType:overlay blockSize:0} overlay_0-563:{mountpoint:/var/lib/containers/storage/overlay/fbab3c95f9770814c897f4046ddeafef93c1c1c5f398167dcdb39e0607bf6f9c/merged major:0 minor:563 fsType:overlay blockSize:0} 
overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/0cbb8c9681a76693a5ed688db680535c799a1558c8fe401262705cd08a80c8fa/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-584:{mountpoint:/var/lib/containers/storage/overlay/700c7681373210dc7e1a09fdff087f389a92a9e52f86b0ca2c8c2854551b2d8a/merged major:0 minor:584 fsType:overlay blockSize:0} overlay_0-589:{mountpoint:/var/lib/containers/storage/overlay/e11344e2f07dd1e8e62cc226b9deaf9fe21bfd8c2b5518286e83009031208eca/merged major:0 minor:589 fsType:overlay blockSize:0} overlay_0-591:{mountpoint:/var/lib/containers/storage/overlay/67a58f1fe28ebdecc201536135d51681d531c35d148d4689852ab095cf52a96a/merged major:0 minor:591 fsType:overlay blockSize:0} overlay_0-593:{mountpoint:/var/lib/containers/storage/overlay/0c3b09e9bc5d144e5e9c1ea8fc94510f3bb37151b5894bbd41b3dc3088edd8fd/merged major:0 minor:593 fsType:overlay blockSize:0} overlay_0-595:{mountpoint:/var/lib/containers/storage/overlay/03766063273bbd5e632b6fe4c00e8fd35a63f5f5bdca93712e30e1ebea98c47c/merged major:0 minor:595 fsType:overlay blockSize:0} overlay_0-597:{mountpoint:/var/lib/containers/storage/overlay/001ede082872725dd69243025fea7abc3b8fd950d9f88fd922fc1950546da817/merged major:0 minor:597 fsType:overlay blockSize:0} overlay_0-599:{mountpoint:/var/lib/containers/storage/overlay/4b0e6828e992cf31eebb8895b1b5793fc57554b6c2d988771c9084058dd708fd/merged major:0 minor:599 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/536f83b6e8b99d092a2c4004db4dcfbe39186d366dce02e565d58396b89a34df/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-601:{mountpoint:/var/lib/containers/storage/overlay/9e4feeffb80b4eab33a6dd36adda920e900927b27f68a37be5c52ef9a6da92eb/merged major:0 minor:601 fsType:overlay blockSize:0} overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/dca3e0fd554bc0847676ea17d90c55126899b4737aacead3984687bd00f11c90/merged major:0 minor:605 fsType:overlay blockSize:0} 
overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/fdb8d3b913b3bb0b6b667d59e50d158a840ac0c056963140d6a6880f00fdbfd9/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-610:{mountpoint:/var/lib/containers/storage/overlay/6bd001abfee0a3fed7a9027d83317641caaf6c384432cd41949f309b0f82d949/merged major:0 minor:610 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/770ff0d7ef95ef13c1aad2ac011914599a6429195f6f736932e36dc119b8ba1b/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/d967d745c74c3a649150e3b2c6f3dec0a9fb2ae11b48f58947f5daabc8076ac1/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-625:{mountpoint:/var/lib/containers/storage/overlay/a7dd26fe0f955a3565ffc40d6f75d1823f4a244990795968fe2c214933d79f77/merged major:0 minor:625 fsType:overlay blockSize:0} overlay_0-627:{mountpoint:/var/lib/containers/storage/overlay/57a3a0635646593bceaca3e54e7f9f4be117e1fc32ad13a081cbbff5a7f1d961/merged major:0 minor:627 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/26026b77fb2a8805d246daabb9173f083d8a9c9addc071ad332ef69e1229717e/merged major:0 minor:637 fsType:overlay blockSize:0} overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/7a1bc46eb11f1e4fa9c377f2a0fbbaec24c222281db86ab3f0b4ca9584a2e6be/merged major:0 minor:639 fsType:overlay blockSize:0} overlay_0-65:{mountpoint:/var/lib/containers/storage/overlay/3463e5c9795bf4c87eafffd7cf537c74bbfa8506f3e69b26912a6540f1b4762c/merged major:0 minor:65 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/9977496bb5f3fc1948cf3a96555ee9400167db859592a7934e707476d0d433a9/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-668:{mountpoint:/var/lib/containers/storage/overlay/b80b1cd54f7d27bbe0c6a8ff1d21cca456bdb47a584165c2048777b00e0e58ce/merged major:0 minor:668 fsType:overlay blockSize:0} 
overlay_0-671:{mountpoint:/var/lib/containers/storage/overlay/62f34001a181dd81ff6ec2365e9b682396d48f734de76d4addcee33bcee86570/merged major:0 minor:671 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/7271576ecbb8d3296e6a2649c3e47567e1fc53385a1ada6d94a2fe30dbc2b5e9/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-682:{mountpoint:/var/lib/containers/storage/overlay/d73f84788e9e7a1cdf0b513552357fc54441923478a52e8b846fb26bd1965409/merged major:0 minor:682 fsType:overlay blockSize:0} overlay_0-690:{mountpoint:/var/lib/containers/storage/overlay/f83c976bef77378538f6036e361adbb2626b286c3c32fcfe28c0870ddb5f3af8/merged major:0 minor:690 fsType:overlay blockSize:0} overlay_0-695:{mountpoint:/var/lib/containers/storage/overlay/f15265aec42a1ef5184e0b7834082dda7b20092995b714388b1e9f0eb68c09d0/merged major:0 minor:695 fsType:overlay blockSize:0} overlay_0-698:{mountpoint:/var/lib/containers/storage/overlay/71636e552640975bd07fbb21b134dc528917575af7d7035a0e1806f1ff835f00/merged major:0 minor:698 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/9967a44112199a3de50a9254768084ea2b54c985dab43221e5106bbd85b029d2/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-713:{mountpoint:/var/lib/containers/storage/overlay/3f7a04b108046bcddc484843528b50109c82a6bafa7db250cc428bb6bd5fe317/merged major:0 minor:713 fsType:overlay blockSize:0} overlay_0-715:{mountpoint:/var/lib/containers/storage/overlay/68a6549f6ba2e2472b7e9f392e9c93c004e3d56b124aac8f750c2d6678af72b9/merged major:0 minor:715 fsType:overlay blockSize:0} overlay_0-717:{mountpoint:/var/lib/containers/storage/overlay/e1e322dc20546ff77d1707df2ca4273bfaf8a5a2cd17e27ee2240b9dd7739633/merged major:0 minor:717 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/095f904ad5676284a0e35bab2b05fc8595059031037ed672a222bc5973021cb1/merged major:0 minor:72 fsType:overlay blockSize:0} 
overlay_0-726:{mountpoint:/var/lib/containers/storage/overlay/44ee6eacbb8a0a484a1de6af4d7e5878beb96e396bc5e248a8bc549431982e2d/merged major:0 minor:726 fsType:overlay blockSize:0} overlay_0-738:{mountpoint:/var/lib/containers/storage/overlay/9c0023aafc19ab081c8bc87b3ad8fd3a94a095db226781d6c25f90a7400badc5/merged major:0 minor:738 fsType:overlay blockSize:0} overlay_0-740:{mountpoint:/var/lib/containers/storage/overlay/c45ce1ecf20ee8e273baf7a509473380c3216e103c5978a2152b5c18ca54a102/merged major:0 minor:740 fsType:overlay blockSize:0} overlay_0-742:{mountpoint:/var/lib/containers/storage/overlay/7c1f7e649ffd823284b35f50569715193ab922117e96ee899cdf39f07ae460e2/merged major:0 minor:742 fsType:overlay blockSize:0} overlay_0-749:{mountpoint:/var/lib/containers/storage/overlay/fdb300d8296edb4216e697ff05b18f717a41f8e069fec84d7e257efe48e126ae/merged major:0 minor:749 fsType:overlay blockSize:0} overlay_0-762:{mountpoint:/var/lib/containers/storage/overlay/a88fc6e5613e4da516bf67f0416ebb5b86b184b2eed51033bf011cac467e2273/merged major:0 minor:762 fsType:overlay blockSize:0} overlay_0-764:{mountpoint:/var/lib/containers/storage/overlay/ca82cd37e763bc7c6ff6cbad1d436d771a57be2c07860ad0b682451d143e7e02/merged major:0 minor:764 fsType:overlay blockSize:0} overlay_0-775:{mountpoint:/var/lib/containers/storage/overlay/5e0799f656a3931307334bfab395e76bb7e87f51781efc4c465ca53904dcc2f8/merged major:0 minor:775 fsType:overlay blockSize:0} overlay_0-776:{mountpoint:/var/lib/containers/storage/overlay/1a4f2c987c9a45f1fc7e4d12c9cae28190c3daa6d4628111a4f3a33a2824977f/merged major:0 minor:776 fsType:overlay blockSize:0} overlay_0-778:{mountpoint:/var/lib/containers/storage/overlay/5aef3b1822260f40f92915dab4ba8585b71c5e9f438d6f9cf6a607cb851efa81/merged major:0 minor:778 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/dd987487bc9f004af1c600e03374d40ca011c8053e11293030379601929d1b50/merged major:0 minor:78 fsType:overlay blockSize:0} 
overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/d2538b7a76f8250f50756da88d2a835e7de1aa18482e773e918501c2f397caf0/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-791:{mountpoint:/var/lib/containers/storage/overlay/fe52054af5231b2899e7a8e94cf8f16bb0bec4b9f627fe6b10de2df7b6418163/merged major:0 minor:791 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/177b2bf6d6d5a1cdc695f6bfc7c1bd66357294f6fcfb561fe24ba4a74e1693f8/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-800:{mountpoint:/var/lib/containers/storage/overlay/90120418d9dbdeb92ff036cf8c51247ad59b562739fc81a355e8e6be413579da/merged major:0 minor:800 fsType:overlay blockSize:0} overlay_0-807:{mountpoint:/var/lib/containers/storage/overlay/09f2a580b52b4cf9938dc11ec9b31afae52aa2fb728230c7794f23f171af48fb/merged major:0 minor:807 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/cd72f4b1d9f38bc256714642150d30b41080764c940e83e5501ad9c014ccbcc0/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-833:{mountpoint:/var/lib/containers/storage/overlay/e092371eea012a2fc5875af317a03aefc587a011a7fa61861adf1e175f488bda/merged major:0 minor:833 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/e91e6e1ae20d36a0b2f3426e76203043a84514d1eb3f8202b03546dc8aaa32ba/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-844:{mountpoint:/var/lib/containers/storage/overlay/d3f957588510f70219acb55ad35d567cc58112c35e765aa04bf122738f5e7cf9/merged major:0 minor:844 fsType:overlay blockSize:0} overlay_0-845:{mountpoint:/var/lib/containers/storage/overlay/a8368ba6387f8ba3a65b2d4e4972791d6cf1c360d51220e41e66669481b3c01e/merged major:0 minor:845 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/562e7437d583502d5bb47843653683f20d7f4d6b065d1ba7faaaeaae3ebc6038/merged major:0 minor:85 fsType:overlay blockSize:0} 
overlay_0-855:{mountpoint:/var/lib/containers/storage/overlay/91ee90e84ca1f7de527c0c1617377de219801bccc0091b08d8a2f918b0fd41bd/merged major:0 minor:855 fsType:overlay blockSize:0} overlay_0-865:{mountpoint:/var/lib/containers/storage/overlay/9216b00ec87ec6e5b999b54c372998536bcd14711a31a11f2bc0bf33baede408/merged major:0 minor:865 fsType:overlay blockSize:0} overlay_0-867:{mountpoint:/var/lib/containers/storage/overlay/f494ddefc21ace23ace3f6ae23fa1452191f4fc5a04ff09afa715346abc9850e/merged major:0 minor:867 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/9aae9ac84700bba6619f8539fc933a25181383ccbf417b8b4703e74569190734/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-870:{mountpoint:/var/lib/containers/storage/overlay/7e62958eaf9d8d549e1d35637f0028f09463ca448e7cf64012029095dc920bfc/merged major:0 minor:870 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/2a2db260a7c05e59840db258c1599730a4f37bad48fc1ee3762c225a6d0c887c/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-887:{mountpoint:/var/lib/containers/storage/overlay/81440b474336e732baa141d246280379e6fc9b79836827b7fbced8d9c0b9aca6/merged major:0 minor:887 fsType:overlay blockSize:0} overlay_0-889:{mountpoint:/var/lib/containers/storage/overlay/c5f165aff20645c23c958b37715253e77c5159c06c872d2a216b2a9660fc762b/merged major:0 minor:889 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/df7d15b77f98253fcd107ea430c71db48c72aced9f332c412ecbbb83eb6d1459/merged major:0 minor:891 fsType:overlay blockSize:0} overlay_0-899:{mountpoint:/var/lib/containers/storage/overlay/272078c82444500ffb03c2eb3f0848ee7b2f87697b0824641645239028218069/merged major:0 minor:899 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/b9e5a28c151d34ecb76fe63d54e6c5de1b97dd026deff8fa6241a4ad2e747a73/merged major:0 minor:90 fsType:overlay blockSize:0} 
overlay_0-909:{mountpoint:/var/lib/containers/storage/overlay/be941a5ba699b956255235f46e9d8c4728ca79550088422b02a8d253c8610de2/merged major:0 minor:909 fsType:overlay blockSize:0} overlay_0-911:{mountpoint:/var/lib/containers/storage/overlay/c889348a7c21509bf133f0aa71701b2f9ca5546397cf1a549075d77bf0602dd9/merged major:0 minor:911 fsType:overlay blockSize:0} overlay_0-925:{mountpoint:/var/lib/containers/storage/overlay/57d40427d029bf76e049fcb080f14f63e2ad41c82c65252956245832c1a13948/merged major:0 minor:925 fsType:overlay blockSize:0} overlay_0-926:{mountpoint:/var/lib/containers/storage/overlay/1adcf83e799817778704fc633aa227aff01748ab6039510fe302c64d005c96e7/merged major:0 minor:926 fsType:overlay blockSize:0} overlay_0-933:{mountpoint:/var/lib/containers/storage/overlay/8806f56e6ee2e46e5c3f717cbfb16a52a315510bd22cd81d810ad932296edea1/merged major:0 minor:933 fsType:overlay blockSize:0} overlay_0-939:{mountpoint:/var/lib/containers/storage/overlay/6660d8561c98062aa9ed993aad14e006025992bdbfe529f67ddd756a23005a0f/merged major:0 minor:939 fsType:overlay blockSize:0} overlay_0-942:{mountpoint:/var/lib/containers/storage/overlay/00b7eaa12c11b4639c84a4e703a065f393f4cac207cf000b065f4b9855f08a3d/merged major:0 minor:942 fsType:overlay blockSize:0} overlay_0-945:{mountpoint:/var/lib/containers/storage/overlay/d5c6b1333197f8619331ba8a604091e71c7c8fb8f864e547776e406076b9f6a9/merged major:0 minor:945 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/de1733f8a94340ad0e022dcb1e5825463a9270a0dc05303f98bdea7790158208/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-957:{mountpoint:/var/lib/containers/storage/overlay/b75ba5afbe20d6115114a2026f535df14699234545ce2a45113c58621b04fe92/merged major:0 minor:957 fsType:overlay blockSize:0} overlay_0-959:{mountpoint:/var/lib/containers/storage/overlay/4d84d62feee7a2fd3c526a52821167e4edbce1df9b67681893cb198b4a6feb9b/merged major:0 minor:959 fsType:overlay blockSize:0} 
overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/7ddf235ea877b620307dc73c76ed4cf1a4265da4504011d42cac1de6a41ebcbb/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-974:{mountpoint:/var/lib/containers/storage/overlay/808eb9ed38e72de66dec370e44325a95f72568be404fb3e0ebbbfaa13b82b209/merged major:0 minor:974 fsType:overlay blockSize:0} overlay_0-975:{mountpoint:/var/lib/containers/storage/overlay/f050842ac5cf4f791374b512417537721831ebb393a4b225edf40d69d5334496/merged major:0 minor:975 fsType:overlay blockSize:0} overlay_0-977:{mountpoint:/var/lib/containers/storage/overlay/a1c3e836f52a94761c4ef49f972100f2357f745630edcd1160ea6cb9d0197bae/merged major:0 minor:977 fsType:overlay blockSize:0} overlay_0-979:{mountpoint:/var/lib/containers/storage/overlay/64dbcb9bc56b775cf870f6afdb13999713328afaef9fe44d38aafd0e5b97caad/merged major:0 minor:979 fsType:overlay blockSize:0} overlay_0-981:{mountpoint:/var/lib/containers/storage/overlay/8c6c3c03f011166c66674b43c70883776d525dbaf5326ec871593fceddca135f/merged major:0 minor:981 fsType:overlay blockSize:0} overlay_0-984:{mountpoint:/var/lib/containers/storage/overlay/18181d392f4da9d2986f48a7e7014a6d496cf9f646d4652c2720b0123c658ff6/merged major:0 minor:984 fsType:overlay blockSize:0} overlay_0-985:{mountpoint:/var/lib/containers/storage/overlay/7b7235cbcab619ed93c55a720c3a8282a336aff362386708cb809c45e6fc9ca8/merged major:0 minor:985 fsType:overlay blockSize:0} overlay_0-986:{mountpoint:/var/lib/containers/storage/overlay/574dc2ad03f6dc86190f60e5ccf19840102ba295ec03971b76bdf06d1e6a71bb/merged major:0 minor:986 fsType:overlay blockSize:0} overlay_0-991:{mountpoint:/var/lib/containers/storage/overlay/e5b2d71b48f3bfafdbabc89241a14f34299f22e01caf0db8cf4ada00d38792e0/merged major:0 minor:991 fsType:overlay blockSize:0} overlay_0-994:{mountpoint:/var/lib/containers/storage/overlay/0c6a594b20c0613ab5db4357e03cf5ad90365f10663f8c8f3d5cdaab94f96611/merged major:0 minor:994 fsType:overlay blockSize:0}] Feb 23 
13:06:46.764595 master-0 kubenswrapper[17411]: I0223 13:06:46.762685 17411 manager.go:217] Machine: {Timestamp:2026-02-23 13:06:46.761189947 +0000 UTC m=+0.188696584 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1f5e0293a13e4ebabb9c281fe953e842 SystemUUID:1f5e0293-a13e-4eba-bb9c-281fe953e842 BootID:08350faf-787c-4da6-a444-e23ed90f1388 Filesystems:[{Device:overlay_0-350 DeviceMajor:0 DeviceMinor:350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-764 DeviceMajor:0 DeviceMinor:764 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/430cb782-18d5-4429-99ef-29d3dca0d803/volumes/kubernetes.io~projected/kube-api-access-24gm8 DeviceMajor:0 DeviceMinor:820 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d0c7587b-eea6-4d98-b39d-3a0feba4035d/volumes/kubernetes.io~projected/kube-api-access-q2cgc DeviceMajor:0 DeviceMinor:340 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-368 DeviceMajor:0 DeviceMinor:368 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f81b2dd369e93dc40f927baca8dae686df59bd8a564f1ae9d88f270b6628811d/userdata/shm DeviceMajor:0 DeviceMinor:417 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:504 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-119 DeviceMajor:0 DeviceMinor:119 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:688 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-690 DeviceMajor:0 DeviceMinor:690 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0fecd2bc8223ea55048ff254cc1da63a7ab6b31fd457d9272751880294076f65/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-776 DeviceMajor:0 DeviceMinor:776 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-627 DeviceMajor:0 DeviceMinor:627 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ff4d0be1e1784bbea67828ca324e5f5b249ae15e9f46dff8848a9e4b264b1f9a/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-524 
DeviceMajor:0 DeviceMinor:524 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1036 DeviceMajor:0 DeviceMinor:1036 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:508 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-599 DeviceMajor:0 DeviceMinor:599 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~projected/kube-api-access-slw4h DeviceMajor:0 DeviceMinor:242 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:847 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-463 DeviceMajor:0 DeviceMinor:463 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4bc22782-a369-48aa-a0e8-c1c63ffa3053/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:773 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-472 DeviceMajor:0 DeviceMinor:472 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 
Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:266 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-994 DeviceMajor:0 DeviceMinor:994 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d7283ee-8959-44b6-83fb-b152510485eb/volumes/kubernetes.io~projected/kube-api-access-hpgsw DeviceMajor:0 DeviceMinor:1030 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:498 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3d85c030-4931-42d7-afd6-72b41789aea8/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:861 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9f4b505810756bc1aacbada86c7f39ac25a9943e5236452d1fe977e3b589b653/userdata/shm DeviceMajor:0 DeviceMinor:374 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/18b48459-51ad-4b0d-8608-4ba6d3fa8e16/volumes/kubernetes.io~projected/kube-api-access-cjpkc DeviceMajor:0 DeviceMinor:743 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-909 DeviceMajor:0 DeviceMinor:909 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/8db940c1-82ba-4b6e-8137-059e26ab1ced/volumes/kubernetes.io~projected/kube-api-access-ts56d DeviceMajor:0 DeviceMinor:821 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-985 DeviceMajor:0 DeviceMinor:985 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/71a07622-3038-4b8c-b6bb-5f28a4115012/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:426 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:166 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4220039c33efb83321a003be7571a3649fc8e65f3d945873306ea0af077401f3/userdata/shm DeviceMajor:0 DeviceMinor:531 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-800 DeviceMajor:0 DeviceMinor:800 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3dcb59345b5bc0117b6a00f1149c42a48da8235be304949c4a08edf500dfc736/userdata/shm DeviceMajor:0 DeviceMinor:98 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-399 DeviceMajor:0 DeviceMinor:399 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~projected/kube-api-access-fppk7 DeviceMajor:0 DeviceMinor:111 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~projected/kube-api-access-7v7b9 DeviceMajor:0 DeviceMinor:148 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-115 DeviceMajor:0 DeviceMinor:115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1071 DeviceMajor:0 DeviceMinor:1071 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-671 DeviceMajor:0 DeviceMinor:671 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:723 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:258 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-476 DeviceMajor:0 DeviceMinor:476 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:512 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-610 DeviceMajor:0 DeviceMinor:610 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~projected/kube-api-access-8l6fp DeviceMajor:0 DeviceMinor:843 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-791 DeviceMajor:0 DeviceMinor:791 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-715 DeviceMajor:0 DeviceMinor:715 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-430 DeviceMajor:0 DeviceMinor:430 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3c46e007ea8dbe14a7d36fc217c695f92a860be1997c49493f763a50d92a0aea/userdata/shm DeviceMajor:0 DeviceMinor:499 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-401 DeviceMajor:0 DeviceMinor:401 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-981 DeviceMajor:0 DeviceMinor:981 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1011 DeviceMajor:0 DeviceMinor:1011 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:578 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65b5e7cfe708cd0b56472acd737e9226322c906b31eea544d5610d0aba35343f/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae5c9120-c38d-46c0-af43-9275563b1a67/volumes/kubernetes.io~projected/kube-api-access-8f6sq DeviceMajor:0 DeviceMinor:421 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-541 DeviceMajor:0 DeviceMinor:541 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~projected/kube-api-access-fsp9d DeviceMajor:0 DeviceMinor:147 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1002 DeviceMajor:0 DeviceMinor:1002 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ab71705-d574-4f95-b3fc-9f7cf5e8a557/volumes/kubernetes.io~projected/kube-api-access-rrhrx DeviceMajor:0 DeviceMinor:260 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3d85c030-4931-42d7-afd6-72b41789aea8/volumes/kubernetes.io~projected/kube-api-access-zhl9t DeviceMajor:0 DeviceMinor:862 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/85958edf-e3da-4704-8f09-cf049101f2e6/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:77 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5011e8950499afd85717ca70ff2f77337ae409cf405b4306b6e9ccdd5b46be9c/userdata/shm DeviceMajor:0 DeviceMinor:534 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-408 DeviceMajor:0 DeviceMinor:408 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-865 DeviceMajor:0 DeviceMinor:865 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9933c3953079b9e9be4ada69849d6fdb342498ae2f03fc5ebff1e04b6c03839b/userdata/shm DeviceMajor:0 DeviceMinor:751 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d953c37-1b74-4ce5-89cb-b3f53454fc57/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:506 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-452 DeviceMajor:0 DeviceMinor:452 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:509 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49a6b189f8fbf9c0aa7bb66aa47a22331a8f42d58ff77972bbb9f47a339fc2a5/userdata/shm DeviceMajor:0 DeviceMinor:965 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1083 DeviceMajor:0 DeviceMinor:1083 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6098dfd89bcd8aca6a463063a3944c75855225a89ecc7de08ce7be93098f2f35/userdata/shm DeviceMajor:0 DeviceMinor:1057 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~projected/kube-api-access-lmw9r DeviceMajor:0 DeviceMinor:255 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f67140661bca80f0082006c33ba58847d3a949b7d72bea750ff23edb65986950/userdata/shm DeviceMajor:0 DeviceMinor:526 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9c3f9dc5-d10d-452c-bf5d-c5830a444617/volumes/kubernetes.io~projected/kube-api-access-65tqd 
DeviceMajor:0 DeviceMinor:1045 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1052 DeviceMajor:0 DeviceMinor:1052 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1033 DeviceMajor:0 DeviceMinor:1033 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:510 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-974 DeviceMajor:0 DeviceMinor:974 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f88d6ed3-c0a6-4eef-b80c-417994cf69b0/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:868 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:490 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1021 DeviceMajor:0 DeviceMinor:1021 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~projected/kube-api-access-xjthf DeviceMajor:0 DeviceMinor:272 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 
DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~projected/kube-api-access-kdnn5 DeviceMajor:0 DeviceMinor:267 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:580 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-496 DeviceMajor:0 DeviceMinor:496 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-386 DeviceMajor:0 DeviceMinor:386 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e4a89c63867c66249f3be8d13ff9c7bfaab9b37c45169bdf97b3f2b62ddd38e/userdata/shm DeviceMajor:0 DeviceMinor:88 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4bc22782-a369-48aa-a0e8-c1c63ffa3053/volumes/kubernetes.io~projected/kube-api-access-265wg DeviceMajor:0 DeviceMinor:797 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/70ccda5f-ca1a-4fce-b77f-a1132f85635a/volumes/kubernetes.io~projected/kube-api-access-mwdtv DeviceMajor:0 DeviceMinor:735 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:689 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~projected/kube-api-access-nmv5f DeviceMajor:0 DeviceMinor:241 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae5797327ba541f955d9212090aad83a203cfcaad025e64f727a371889902b1b/userdata/shm DeviceMajor:0 DeviceMinor:514 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34ad2537-b5fe-463f-8e95-f47cc886aa5e/volumes/kubernetes.io~projected/kube-api-access-4r4jv DeviceMajor:0 DeviceMinor:684 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:708 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f2f8ec2305a812ab189524192ed5bf86a7bba7a6b18ab8873a325d48aca12f0/userdata/shm DeviceMajor:0 DeviceMinor:711 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e7fbab55-8405-44f4-ae2a-412c115ce411/volumes/kubernetes.io~projected/kube-api-access-lwphb DeviceMajor:0 DeviceMinor:135 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4152c7de869df80f0c905cfd7a6252eb8e9e684fe6b9642981a93d71e896532/userdata/shm DeviceMajor:0 DeviceMinor:1031 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-439 DeviceMajor:0 DeviceMinor:439 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-957 DeviceMajor:0 
DeviceMinor:957 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e8b057f2132ff258b6f72db6a015d3a5562051b7f885529a6871d5a5d46fff27/userdata/shm DeviceMajor:0 DeviceMinor:709 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-778 DeviceMajor:0 DeviceMinor:778 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/031016de-897e-42bc-9de4-843122f64a75/volumes/kubernetes.io~projected/kube-api-access-sbml7 DeviceMajor:0 DeviceMinor:704 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:846 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f88d6ed3-c0a6-4eef-b80c-417994cf69b0/volumes/kubernetes.io~projected/kube-api-access-xdqd6 DeviceMajor:0 DeviceMinor:869 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-887 DeviceMajor:0 DeviceMinor:887 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~projected/kube-api-access-tvr7p DeviceMajor:0 DeviceMinor:239 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c0138fc447fbdee86ffbe815a7ddaa8ef72faf5cdfc02ebf5b12e2363a575ee0/userdata/shm DeviceMajor:0 DeviceMinor:947 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef601f2e27644089bb89c3773b71863aebd556568df59bb7ed37c9da1b824997/userdata/shm DeviceMajor:0 DeviceMinor:149 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-726 DeviceMajor:0 DeviceMinor:726 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-398 DeviceMajor:0 DeviceMinor:398 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:256 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-539 DeviceMajor:0 DeviceMinor:539 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d82f223-e28b-4917-8513-3ca5c6e9bff7/volumes/kubernetes.io~projected/kube-api-access-crt2t DeviceMajor:0 DeviceMinor:167 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-555 DeviceMajor:0 DeviceMinor:555 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b279587ff3b533f90c8598bc9cab9d154d09bb9caaf9f198b885d5940932b084/userdata/shm DeviceMajor:0 DeviceMinor:757 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-762 DeviceMajor:0 DeviceMinor:762 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c34c0686c926bdae121a0eedb681349d3da6cf0bf3d0236efb47c671f55f2bfa/userdata/shm DeviceMajor:0 DeviceMinor:967 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 
DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-104 DeviceMajor:0 DeviceMinor:104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-182 DeviceMajor:0 DeviceMinor:182 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714/volumes/kubernetes.io~projected/kube-api-access-d8cx9 DeviceMajor:0 DeviceMinor:700 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-370 DeviceMajor:0 DeviceMinor:370 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-1054 DeviceMajor:0 DeviceMinor:1054 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1065 DeviceMajor:0 DeviceMinor:1065 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-979 DeviceMajor:0 DeviceMinor:979 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7eebc0d49b7c567b48cd5eefc8e53ef5d1ed0561b20f604d85eb5c27c39b44c1/userdata/shm DeviceMajor:0 DeviceMinor:543 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b7585f9f-12e5-451b-beeb-db43ae778f25/volumes/kubernetes.io~projected/kube-api-access-qfrht DeviceMajor:0 DeviceMinor:279 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-381 DeviceMajor:0 DeviceMinor:381 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:816 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/65ddfc68-2612-42b6-ad11-6fe44f1cff60/volumes/kubernetes.io~projected/kube-api-access-8jg7c DeviceMajor:0 DeviceMinor:130 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c0d6008c-6e09-4e61-83a5-60456ca90e1e/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:466 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-545 DeviceMajor:0 DeviceMinor:545 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:259 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5a186719c5336b48d37cc198d7b066ec48103dfdc1d217163ebf123ed0ab417/userdata/shm DeviceMajor:0 DeviceMinor:343 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e8a55e200b06071852324dd5becc03353e4f62598f3846b794dbf08621f93e39/userdata/shm DeviceMajor:0 DeviceMinor:468 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-713 DeviceMajor:0 DeviceMinor:713 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/24dab1bc-cf56-429b-93ce-911970c41b5c/volumes/kubernetes.io~projected/kube-api-access-q7h97 DeviceMajor:0 DeviceMinor:278 
Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3379914a728662133497da67617919926a093f183dd51d51d102580cd6dc439c/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-65 DeviceMajor:0 DeviceMinor:65 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-926 DeviceMajor:0 DeviceMinor:926 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-939 DeviceMajor:0 DeviceMinor:939 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/25b5540c-da7d-4b6f-a15f-394451f4674e/volumes/kubernetes.io~projected/kube-api-access-2csk2 DeviceMajor:0 DeviceMinor:240 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:249 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/048f4455-d99a-407b-8674-60efc7aa6ecb/volumes/kubernetes.io~projected/kube-api-access-plz5n DeviceMajor:0 DeviceMinor:282 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~projected/kube-api-access-rqsvs DeviceMajor:0 DeviceMinor:475 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-601 DeviceMajor:0 DeviceMinor:601 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/d32952be-0fe3-431f-aa8f-6a35159fa845/volumes/kubernetes.io~projected/kube-api-access-5zs2l DeviceMajor:0 DeviceMinor:373 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:492 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-742 DeviceMajor:0 DeviceMinor:742 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a6a40ec2d8a01ea18fd8cf1b6cf2eaa1958e8d00567ecf3d9242ffd4f0f40b7/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:253 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33cac62afbdb0955b81a34c275e7dcd7f9a70a4c06dc059893f1ad4906b2e19a/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:491 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/16898873-740b-4b85-99cf-d25a28d4ab00/volumes/kubernetes.io~projected/kube-api-access-xhmk8 DeviceMajor:0 DeviceMinor:848 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e863839c35f3d76c23dbc06dbedd4d1482a212122b16325b611cacabea8825bb/userdata/shm DeviceMajor:0 DeviceMinor:863 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-561 DeviceMajor:0 DeviceMinor:561 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/691aedbd28a747f226bebdd350428eca31ef9a07fa5127fd9ae499bd323b6128/userdata/shm DeviceMajor:0 DeviceMinor:1100 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-899 DeviceMajor:0 DeviceMinor:899 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-925 DeviceMajor:0 DeviceMinor:925 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8b0568f1af714331492afb936eff9364e4e1b161e76a0c02477b4d75a1981323/userdata/shm DeviceMajor:0 DeviceMinor:518 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e402396c861028ad44b45bca58dd0a4df2309cc7110b7c0eb008ea09d7318bee/userdata/shm DeviceMajor:0 DeviceMinor:442 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:112 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e39861f7eba3a69549695ea713f86bb313f7b6a9495d969cd59f6af1de1fb17/userdata/shm DeviceMajor:0 DeviceMinor:835 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~projected/kube-api-access-2857n DeviceMajor:0 DeviceMinor:736 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bfb63245da0778f51b7093310ac46aa7faa9d649b159ea6bf34847612b9c785a/userdata/shm DeviceMajor:0 DeviceMinor:301 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~projected/kube-api-access-r8l8f DeviceMajor:0 DeviceMinor:263 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6052e687d5a0ce780ee931cc7745ee82029f77a28ee3b7f8c2e4558bd684d9be/userdata/shm DeviceMajor:0 DeviceMinor:297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dcd03d6e-4c8c-400a-8001-343aaeeca93b/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:505 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1043 DeviceMajor:0 DeviceMinor:1043 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/11bfb3ba69318ac82e6a17119971c7970b30aa29f2137edc2b60951ffab2514d/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/08577c3c-73d8-47f4-ba30-aec11af51d40/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:511 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-717 DeviceMajor:0 DeviceMinor:717 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/929cd0d2afd60c7d9f544041dba457a14033d12033f2175e4ed353ff5c86ad87/userdata/shm DeviceMajor:0 DeviceMinor:131 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a6904138e757c983258da9d68a265caa1653a1f12aa6dce24570b08bc55548c/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4344b3d3f6b6142165c0129c787b17654ed07ce21ae9e2393257e14099cdbbe9/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/da5d5997-e45f-4858-a9a9-e880bc222caf/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:503 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa/volumes/kubernetes.io~projected/kube-api-access-8c4jr DeviceMajor:0 DeviceMinor:729 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-867 DeviceMajor:0 DeviceMinor:867 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~projected/kube-api-access-tz9fr DeviceMajor:0 DeviceMinor:257 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0128982b-01b4-49cb-ab4a-8759b844c86b/volumes/kubernetes.io~projected/kube-api-access-b2s4f DeviceMajor:0 DeviceMinor:817 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0b622d2ce727cdb988e6f2262823c6404b1690f9ace5d0d0a58996f9054295b9/userdata/shm DeviceMajor:0 DeviceMinor:423 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab/volumes/kubernetes.io~projected/kube-api-access-8hlwn DeviceMajor:0 DeviceMinor:841 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:143 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:474 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c159d5f4-5c95-4600-80ec-a17a419cfd7a/volumes/kubernetes.io~projected/kube-api-access-rbl2g DeviceMajor:0 DeviceMinor:493 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-597 DeviceMajor:0 DeviceMinor:597 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d32952be-0fe3-431f-aa8f-6a35159fa845/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:372 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0a80d5ac-27ce-4ba9-809e-28c86b80163b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-388 DeviceMajor:0 DeviceMinor:388 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1005 DeviceMajor:0 DeviceMinor:1005 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-986 DeviceMajor:0 DeviceMinor:986 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2aa19e4d5644a53e8e4d1cac2c7eaac4c6b6bb82c8eb4f73291e6662560a35fe/userdata/shm DeviceMajor:0 DeviceMinor:521 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~projected/kube-api-access-xlpqn DeviceMajor:0 DeviceMinor:581 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-740 DeviceMajor:0 DeviceMinor:740 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/45f23e7a0d31d2c3d126aa0253e052ced5690e8352ab68bf6cd5ecb2feb526ad/userdata/shm DeviceMajor:0 DeviceMinor:963 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:507 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~projected/kube-api-access-p4h6l DeviceMajor:0 DeviceMinor:281 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1039 DeviceMajor:0 DeviceMinor:1039 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d91fa6bb-0c88-4930-884a-67e840d58a9f/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:724 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:840 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-807 DeviceMajor:0 DeviceMinor:807 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bed3da5536171867bf64480ad5077cc20f7948c0a8fbe4ad2cdb5e228228b281/userdata/shm DeviceMajor:0 DeviceMinor:1046 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b1970ec8-620e-4529-bf3b-1cf9a52c27d3/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:264 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-549 DeviceMajor:0 DeviceMinor:549 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/ae1799b6-85b0-4aed-8835-35cb3d8d1109/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:254 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf51deb148d0a54f145674839e6a7092757223a01e6702931c3433cd1423df77/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3169cece10dce28604f06b8d9b8e0bfd22fff61c163e615108b41fa4a47fa62f/userdata/shm DeviceMajor:0 DeviceMinor:961 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c0520301-1a6b-49ca-acca-011692d5b784/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:579 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-695 DeviceMajor:0 DeviceMinor:695 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-495 DeviceMajor:0 DeviceMinor:495 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:252 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cbcca259-0dbf-48ca-bf90-eec638dcdd10/volumes/kubernetes.io~projected/kube-api-access-nhgkv DeviceMajor:0 DeviceMinor:277 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ca54e90d031d4b06a1f1151c70b2313b71c3d29fc664753f5b38e9c79f228b5/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-422 DeviceMajor:0 DeviceMinor:422 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7c53d80ed25b572fb20c52dbbef5afc868d8833485719d8f236d81dddeb0a25e/userdata/shm DeviceMajor:0 DeviceMinor:152 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-977 DeviceMajor:0 DeviceMinor:977 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-942 DeviceMajor:0 DeviceMinor:942 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-593 DeviceMajor:0 DeviceMinor:593 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-1061 DeviceMajor:0 DeviceMinor:1061 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:513 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/29908b4a-0df5-4c46-b886-c968976c25fb/volumes/kubernetes.io~projected/kube-api-access-dbzwh DeviceMajor:0 DeviceMinor:819 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1007 DeviceMajor:0 DeviceMinor:1007 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1025 DeviceMajor:0 DeviceMinor:1025 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-833 DeviceMajor:0 DeviceMinor:833 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-197 DeviceMajor:0 DeviceMinor:197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-444 DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/18b48459-51ad-4b0d-8608-4ba6d3fa8e16/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:728 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c33f208a-e158-47e2-83d5-ac792bf3a1d5/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:679 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes/kubernetes.io~projected/kube-api-access-jccjf DeviceMajor:0 DeviceMinor:265 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-551 DeviceMajor:0 DeviceMinor:551 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 
DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-591 DeviceMajor:0 DeviceMinor:591 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0602a01933c19c27331c4869229405bde10812971f78fe4544f70f84182ff9cb/userdata/shm DeviceMajor:0 DeviceMinor:57 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-975 DeviceMajor:0 DeviceMinor:975 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a356ead5da6fa11053b4f6032b0e4b23eab458d556eaf1bb2ab3b5d9b3aca4d2/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-366 DeviceMajor:0 DeviceMinor:366 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-559 DeviceMajor:0 DeviceMinor:559 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70ccda5f-ca1a-4fce-b77f-a1132f85635a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:734 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-437 DeviceMajor:0 DeviceMinor:437 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/430cb782-18d5-4429-99ef-29d3dca0d803/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 
DeviceMinor:813 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7989d68762e9c6f9e5c7905f7cd33057aeb2e18691fc86fd3f8d2ea5eb1f1940/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c2b80534-3c9d-4ddb-9215-d50d63294c7c/volumes/kubernetes.io~projected/kube-api-access-l4j2q DeviceMajor:0 DeviceMinor:262 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a3dfb271-a659-45e0-b51d-5e99ec43b555/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:502 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2559444a55923be36b04d2b835f4fe9aa5657c0c673a3c0e61ca4df7a3e4fa7e/userdata/shm DeviceMajor:0 DeviceMinor:519 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-553 DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-845 DeviceMajor:0 DeviceMinor:845 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-870 DeviceMajor:0 DeviceMinor:870 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-844 DeviceMajor:0 DeviceMinor:844 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-625 DeviceMajor:0 DeviceMinor:625 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-911 DeviceMajor:0 DeviceMinor:911 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0f9f46b3a67457561213f46c0dde489fd5b7ad386b82e3ac02c2cf683cbbb34b/userdata/shm DeviceMajor:0 DeviceMinor:527 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31830e0362f7a4961ccb5574999c9b322d54b8a46c9d7f20c64fbd33df71f3a4/userdata/shm DeviceMajor:0 DeviceMinor:585 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-855 DeviceMajor:0 DeviceMinor:855 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1027 DeviceMajor:0 DeviceMinor:1027 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b4c51b25-f013-4f5c-acbd-598350468192/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:142 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/99399ebb-c95f-4663-b3b6-f5dfabf47fcf/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:245 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8422896f1ec2ab46d73c67a22baefed99a0b0d0ea311d5d1f05da3156542ea9/userdata/shm DeviceMajor:0 DeviceMinor:523 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/623b2142d274970e84b3bbba2aa8e77e527e6d06e0243078dfae6d82495ba0a1/userdata/shm DeviceMajor:0 DeviceMinor:852 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ee436961-c305-4c84-b4f9-175e1d8004fb/volumes/kubernetes.io~projected/kube-api-access-ngvd2 DeviceMajor:0 DeviceMinor:280 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-352 DeviceMajor:0 
DeviceMinor:352 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-698 DeviceMajor:0 DeviceMinor:698 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-749 DeviceMajor:0 DeviceMinor:749 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/92134e9eac995bc624b7c976d7f3c271d22473d1a0968a654d73191099e3ca2d/userdata/shm DeviceMajor:0 DeviceMinor:620 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-155 DeviceMajor:0 DeviceMinor:155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-435 DeviceMajor:0 DeviceMinor:435 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e0c3eebcdc0a49021edd14002068e329a47b402595863d157041ee099c56c4c/userdata/shm DeviceMajor:0 DeviceMinor:515 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-589 DeviceMajor:0 DeviceMinor:589 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b6114492191186efcd3545eb575590b7cd16391b8a4aad43b239f5268bdf89f2/userdata/shm DeviceMajor:0 DeviceMinor:798 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e5215076a24da7b39e84679bbfcb310a83f91ce7772234df3fcbb41f2f595a40/userdata/shm DeviceMajor:0 DeviceMinor:904 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c0b59f2a-7014-448c-9d3b-e38281f07dbc/volumes/kubernetes.io~projected/kube-api-access-nt9nl DeviceMajor:0 DeviceMinor:110 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/bfbb4d6d-7047-48cb-be03-97a57fc688e3/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:538 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-557 DeviceMajor:0 DeviceMinor:557 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-563 DeviceMajor:0 DeviceMinor:563 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-584 DeviceMajor:0 DeviceMinor:584 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/54411ade-3383-48aa-ba10-62ffb40185b9/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:815 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/497bca4205af77adc08934bfd388b5dd2d51e7baefd035ff75a921ff155d6636/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/71a07622-3038-4b8c-b6bb-5f28a4115012/volumes/kubernetes.io~projected/kube-api-access-6r8s7 DeviceMajor:0 DeviceMinor:429 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-404 DeviceMajor:0 DeviceMinor:404 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aaa06fef5e54a39c410b76a0809563d32afa3bde2278654961bb3dcb6c8acd54/userdata/shm DeviceMajor:0 DeviceMinor:657 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1059 DeviceMajor:0 DeviceMinor:1059 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0d7283ee-8959-44b6-83fb-b152510485eb/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:1029 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4e6bc033-cd90-4704-b03a-8e9c6c0d3904/volumes/kubernetes.io~projected/kube-api-access-2tgmq DeviceMajor:0 DeviceMinor:415 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-668 DeviceMajor:0 DeviceMinor:668 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-738 DeviceMajor:0 DeviceMinor:738 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-128 DeviceMajor:0 DeviceMinor:128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/18938fa68af909af787dbe379ca80b17c407618308de01749e7e7cd98cd799e3/userdata/shm DeviceMajor:0 DeviceMinor:529 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-494 DeviceMajor:0 DeviceMinor:494 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c33f208a-e158-47e2-83d5-ac792bf3a1d5/volumes/kubernetes.io~projected/kube-api-access-kpbtg DeviceMajor:0 DeviceMinor:191 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-991 DeviceMajor:0 DeviceMinor:991 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-462 DeviceMajor:0 DeviceMinor:462 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/volumes/kubernetes.io~projected/kube-api-access-gr6rg DeviceMajor:0 DeviceMinor:261 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-419 DeviceMajor:0 DeviceMinor:419 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-775 DeviceMajor:0 DeviceMinor:775 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b48d5b87-189b-45b6-ba55-37bd22d59eb6/volumes/kubernetes.io~projected/kube-api-access-nj957 DeviceMajor:0 DeviceMinor:1056 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-202 DeviceMajor:0 DeviceMinor:202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-889 DeviceMajor:0 DeviceMinor:889 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8db940c1-82ba-4b6e-8137-059e26ab1ced/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:814 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-595 DeviceMajor:0 DeviceMinor:595 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-933 DeviceMajor:0 DeviceMinor:933 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-945 DeviceMajor:0 DeviceMinor:945 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-225 DeviceMajor:0 DeviceMinor:225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-337 DeviceMajor:0 DeviceMinor:337 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-682 DeviceMajor:0 DeviceMinor:682 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-959 DeviceMajor:0 DeviceMinor:959 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1023 DeviceMajor:0 DeviceMinor:1023 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f6d694443d15e509d2263248bb6a8e17f31192cc5c7a28777a4b53f833c71072/userdata/shm DeviceMajor:0 DeviceMinor:117 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c0d6008c-6e09-4e61-83a5-60456ca90e1e/volumes/kubernetes.io~projected/kube-api-access-9l49w DeviceMajor:0 DeviceMinor:467 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-984 DeviceMajor:0 DeviceMinor:984 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-151 DeviceMajor:0 DeviceMinor:151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0602a01933c19c2 MacAddress:8e:30:45:0c:5a:4e Speed:10000 Mtu:8900} 
{Name:0b622d2ce727cdb MacAddress:32:c4:23:65:a2:91 Speed:10000 Mtu:8900} {Name:0f9f46b3a674575 MacAddress:8a:94:bb:fa:e8:c8 Speed:10000 Mtu:8900} {Name:0fecd2bc8223ea5 MacAddress:72:7d:a4:23:4b:a9 Speed:10000 Mtu:8900} {Name:11bfb3ba69318ac MacAddress:26:1c:02:ac:2d:bf Speed:10000 Mtu:8900} {Name:18938fa68af909a MacAddress:9a:a6:32:9f:80:ce Speed:10000 Mtu:8900} {Name:1e0c3eebcdc0a49 MacAddress:1a:22:89:1d:d7:5d Speed:10000 Mtu:8900} {Name:1e39861f7eba3a6 MacAddress:42:26:65:2e:a4:a3 Speed:10000 Mtu:8900} {Name:2559444a55923be MacAddress:ae:98:c1:6e:fd:33 Speed:10000 Mtu:8900} {Name:2aa19e4d5644a53 MacAddress:5a:2b:05:0b:33:39 Speed:10000 Mtu:8900} {Name:3169cece10dce28 MacAddress:ba:51:03:8e:5c:67 Speed:10000 Mtu:8900} {Name:31830e0362f7a49 MacAddress:5a:ad:fb:76:0e:97 Speed:10000 Mtu:8900} {Name:3379914a7286621 MacAddress:9e:96:4a:23:71:9a Speed:10000 Mtu:8900} {Name:33cac62afbdb095 MacAddress:be:07:d1:f7:07:6e Speed:10000 Mtu:8900} {Name:3c46e007ea8dbe1 MacAddress:66:21:36:67:bf:4c Speed:10000 Mtu:8900} {Name:4220039c33efb83 MacAddress:82:31:fc:1c:e6:ef Speed:10000 Mtu:8900} {Name:4344b3d3f6b6142 MacAddress:5a:af:a8:09:99:9d Speed:10000 Mtu:8900} {Name:45f23e7a0d31d2c MacAddress:c6:fd:0a:bc:72:a1 Speed:10000 Mtu:8900} {Name:497bca4205af77a MacAddress:ae:03:47:bf:d1:73 Speed:10000 Mtu:8900} {Name:49a6b189f8fbf9c MacAddress:fe:e6:e1:43:85:12 Speed:10000 Mtu:8900} {Name:5011e8950499afd MacAddress:76:bc:ce:77:7c:d7 Speed:10000 Mtu:8900} {Name:5ca54e90d031d4b MacAddress:0e:41:4c:2f:65:c3 Speed:10000 Mtu:8900} {Name:6052e687d5a0ce7 MacAddress:b6:2f:08:f9:3d:9c Speed:10000 Mtu:8900} {Name:6098dfd89bcd8ac MacAddress:92:46:1b:fa:39:84 Speed:10000 Mtu:8900} {Name:623b2142d274970 MacAddress:92:58:77:40:be:84 Speed:10000 Mtu:8900} {Name:691aedbd28a747f MacAddress:2a:86:58:37:85:54 Speed:10000 Mtu:8900} {Name:6a6904138e757c9 MacAddress:3e:51:ee:a7:97:d5 Speed:10000 Mtu:8900} {Name:7989d68762e9c6f MacAddress:66:d1:60:b1:11:f3 Speed:10000 Mtu:8900} {Name:7eebc0d49b7c567 
MacAddress:9a:07:0f:10:cd:61 Speed:10000 Mtu:8900} {Name:92134e9eac995bc MacAddress:1a:23:b4:c1:98:25 Speed:10000 Mtu:8900} {Name:9933c3953079b9e MacAddress:3e:65:01:21:60:4f Speed:10000 Mtu:8900} {Name:9f4b505810756bc MacAddress:ae:90:ac:11:45:47 Speed:10000 Mtu:8900} {Name:a8422896f1ec2ab MacAddress:7e:13:cc:b3:8c:0d Speed:10000 Mtu:8900} {Name:aaa06fef5e54a39 MacAddress:6a:66:e6:be:ed:76 Speed:10000 Mtu:8900} {Name:ae5797327ba541f MacAddress:c2:d9:f8:cc:5c:0a Speed:10000 Mtu:8900} {Name:b279587ff3b533f MacAddress:ee:e8:af:a4:27:22 Speed:10000 Mtu:8900} {Name:b6114492191186e MacAddress:52:f8:3d:56:86:8b Speed:10000 Mtu:8900} {Name:bed3da553617186 MacAddress:fa:76:2b:e8:a2:40 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:66:cc:a9:b3:d5:47 Speed:0 Mtu:8900} {Name:c34c0686c926bda MacAddress:4e:89:5a:c6:8a:2a Speed:10000 Mtu:8900} {Name:c5a186719c5336b MacAddress:ea:2b:ff:f4:40:fa Speed:10000 Mtu:8900} {Name:cf51deb148d0a54 MacAddress:32:1f:92:3c:80:a2 Speed:10000 Mtu:8900} {Name:e402396c861028a MacAddress:06:38:a1:9f:d3:ca Speed:10000 Mtu:8900} {Name:e863839c35f3d76 MacAddress:ca:f1:fd:01:3d:97 Speed:10000 Mtu:8900} {Name:e8a55e200b06071 MacAddress:3a:03:90:8e:8f:73 Speed:10000 Mtu:8900} {Name:e8b057f2132ff25 MacAddress:da:9f:53:47:ec:60 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fe:58:4c Speed:-1 Mtu:9000} {Name:f67140661bca80f MacAddress:de:63:8e:67:d2:68 Speed:10000 Mtu:8900} {Name:f81b2dd369e93dc MacAddress:c6:4e:2b:e5:7d:70 Speed:10000 Mtu:8900} {Name:ff4d0be1e1784bb MacAddress:5a:ca:81:0e:72:e4 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:42:b9:27:f4:5e:8e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 
Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 
Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.764527 17411 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.764634 17411 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.764923 17411 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765100 17411 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765143 17411 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"P
ercentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765666 17411 topology_manager.go:138] "Creating topology manager with none policy" Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765680 17411 container_manager_linux.go:303] "Creating device plugin manager" Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765690 17411 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765717 17411 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765765 17411 state_mem.go:36] "Initialized new in-memory state store" Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765875 17411 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 23 13:06:46.765922 master-0 kubenswrapper[17411]: I0223 13:06:46.765954 17411 kubelet.go:418] "Attempting to sync node with API server" Feb 23 13:06:46.767971 master-0 kubenswrapper[17411]: I0223 13:06:46.765973 17411 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 23 13:06:46.767971 master-0 kubenswrapper[17411]: I0223 13:06:46.765994 17411 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 23 13:06:46.767971 master-0 kubenswrapper[17411]: I0223 13:06:46.766009 17411 kubelet.go:324] "Adding apiserver pod source" Feb 
23 13:06:46.767971 master-0 kubenswrapper[17411]: I0223 13:06:46.766032 17411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 23 13:06:46.767971 master-0 kubenswrapper[17411]: I0223 13:06:46.767487 17411 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 23 13:06:46.767971 master-0 kubenswrapper[17411]: I0223 13:06:46.767972 17411 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 23 13:06:46.768491 master-0 kubenswrapper[17411]: I0223 13:06:46.768401 17411 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 23 13:06:46.768652 master-0 kubenswrapper[17411]: I0223 13:06:46.768600 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 13:06:46.768652 master-0 kubenswrapper[17411]: I0223 13:06:46.768628 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 13:06:46.768652 master-0 kubenswrapper[17411]: I0223 13:06:46.768648 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 13:06:46.768652 master-0 kubenswrapper[17411]: I0223 13:06:46.768658 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768668 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768678 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768692 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768701 17411 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768716 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768726 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768771 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768793 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 23 13:06:46.768841 master-0 kubenswrapper[17411]: I0223 13:06:46.768837 17411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 23 13:06:46.769525 master-0 kubenswrapper[17411]: I0223 13:06:46.769487 17411 server.go:1280] "Started kubelet" Feb 23 13:06:46.772083 master-0 kubenswrapper[17411]: I0223 13:06:46.771864 17411 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 13:06:46.774172 master-0 kubenswrapper[17411]: I0223 13:06:46.772506 17411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 23 13:06:46.774172 master-0 kubenswrapper[17411]: I0223 13:06:46.772668 17411 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 23 13:06:46.774172 master-0 kubenswrapper[17411]: I0223 13:06:46.773494 17411 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 23 13:06:46.782757 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 23 13:06:46.783800 master-0 kubenswrapper[17411]: I0223 13:06:46.783745 17411 server.go:449] "Adding debug handlers to kubelet server" Feb 23 13:06:46.789828 master-0 kubenswrapper[17411]: I0223 13:06:46.789649 17411 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 13:06:46.790387 master-0 kubenswrapper[17411]: I0223 13:06:46.790297 17411 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 13:06:46.802949 master-0 kubenswrapper[17411]: E0223 13:06:46.802877 17411 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 23 13:06:46.817375 master-0 kubenswrapper[17411]: I0223 13:06:46.817298 17411 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 13:06:46.818381 master-0 kubenswrapper[17411]: I0223 13:06:46.817450 17411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 13:06:46.818381 master-0 kubenswrapper[17411]: I0223 13:06:46.817508 17411 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 12:50:52 +0000 UTC, rotation deadline is 2026-02-24 09:48:12.713290653 +0000 UTC Feb 23 13:06:46.818381 master-0 kubenswrapper[17411]: I0223 13:06:46.817564 17411 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h41m25.895729222s for next certificate rotation Feb 23 13:06:46.818381 master-0 kubenswrapper[17411]: I0223 13:06:46.817579 17411 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 23 13:06:46.818381 master-0 kubenswrapper[17411]: I0223 13:06:46.817632 17411 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 23 13:06:46.818381 master-0 kubenswrapper[17411]: I0223 13:06:46.817991 17411 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 23 
13:06:46.820472 master-0 kubenswrapper[17411]: I0223 13:06:46.819046 17411 factory.go:55] Registering systemd factory
Feb 23 13:06:46.820472 master-0 kubenswrapper[17411]: I0223 13:06:46.819086 17411 factory.go:221] Registration of the systemd container factory successfully
Feb 23 13:06:46.820472 master-0 kubenswrapper[17411]: I0223 13:06:46.819426 17411 factory.go:153] Registering CRI-O factory
Feb 23 13:06:46.820472 master-0 kubenswrapper[17411]: I0223 13:06:46.819442 17411 factory.go:221] Registration of the crio container factory successfully
Feb 23 13:06:46.820472 master-0 kubenswrapper[17411]: I0223 13:06:46.819545 17411 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 23 13:06:46.820472 master-0 kubenswrapper[17411]: I0223 13:06:46.819570 17411 factory.go:103] Registering Raw factory
Feb 23 13:06:46.820472 master-0 kubenswrapper[17411]: I0223 13:06:46.819593 17411 manager.go:1196] Started watching for new ooms in manager
Feb 23 13:06:46.820472 master-0 kubenswrapper[17411]: I0223 13:06:46.820163 17411 manager.go:319] Starting recovery of all containers
Feb 23 13:06:46.823725 master-0 kubenswrapper[17411]: I0223 13:06:46.823653 17411 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 13:06:46.835926 master-0 kubenswrapper[17411]: I0223 13:06:46.835709 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0520301-1a6b-49ca-acca-011692d5b784" volumeName="kubernetes.io/projected/c0520301-1a6b-49ca-acca-011692d5b784-kube-api-access-xlpqn" seLinuxMountContext=""
Feb 23 13:06:46.836084 master-0 kubenswrapper[17411]: I0223 13:06:46.835967 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7fbab55-8405-44f4-ae2a-412c115ce411" volumeName="kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs" seLinuxMountContext=""
Feb 23 13:06:46.836084 master-0 kubenswrapper[17411]: I0223 13:06:46.836033 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16898873-740b-4b85-99cf-d25a28d4ab00" volumeName="kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert" seLinuxMountContext=""
Feb 23 13:06:46.836322 master-0 kubenswrapper[17411]: I0223 13:06:46.836093 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44b07d33-6e84-434e-9a14-431846620968" volumeName="kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs" seLinuxMountContext=""
Feb 23 13:06:46.836494 master-0 kubenswrapper[17411]: I0223 13:06:46.836436 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" volumeName="kubernetes.io/secret/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.836602 master-0 kubenswrapper[17411]: I0223 13:06:46.836554 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3dfb271-a659-45e0-b51d-5e99ec43b555" volumeName="kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls" seLinuxMountContext=""
Feb 23 13:06:46.837105 master-0 kubenswrapper[17411]: I0223 13:06:46.836654 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c51b25-f013-4f5c-acbd-598350468192" volumeName="kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config" seLinuxMountContext=""
Feb 23 13:06:46.837695 master-0 kubenswrapper[17411]: I0223 13:06:46.837600 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85958edf-e3da-4704-8f09-cf049101f2e6" volumeName="kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7" seLinuxMountContext=""
Feb 23 13:06:46.837791 master-0 kubenswrapper[17411]: I0223 13:06:46.837740 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3dfb271-a659-45e0-b51d-5e99ec43b555" volumeName="kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f" seLinuxMountContext=""
Feb 23 13:06:46.837837 master-0 kubenswrapper[17411]: I0223 13:06:46.837790 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0520301-1a6b-49ca-acca-011692d5b784" volumeName="kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-etcd-client" seLinuxMountContext=""
Feb 23 13:06:46.837940 master-0 kubenswrapper[17411]: I0223 13:06:46.837856 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d82f223-e28b-4917-8513-3ca5c6e9bff7" volumeName="kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert" seLinuxMountContext=""
Feb 23 13:06:46.838786 master-0 kubenswrapper[17411]: I0223 13:06:46.838697 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70ccda5f-ca1a-4fce-b77f-a1132f85635a" volumeName="kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle" seLinuxMountContext=""
Feb 23 13:06:46.838848 master-0 kubenswrapper[17411]: I0223 13:06:46.838797 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d953c37-1b74-4ce5-89cb-b3f53454fc57" volumeName="kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics" seLinuxMountContext=""
Feb 23 13:06:46.838889 master-0 kubenswrapper[17411]: I0223 13:06:46.838846 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430cb782-18d5-4429-99ef-29d3dca0d803" volumeName="kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config" seLinuxMountContext=""
Feb 23 13:06:46.838932 master-0 kubenswrapper[17411]: I0223 13:06:46.838881 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430cb782-18d5-4429-99ef-29d3dca0d803" volumeName="kubernetes.io/projected/430cb782-18d5-4429-99ef-29d3dca0d803-kube-api-access-24gm8" seLinuxMountContext=""
Feb 23 13:06:46.838932 master-0 kubenswrapper[17411]: I0223 13:06:46.838916 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54411ade-3383-48aa-ba10-62ffb40185b9" volumeName="kubernetes.io/projected/54411ade-3383-48aa-ba10-62ffb40185b9-kube-api-access-8l6fp" seLinuxMountContext=""
Feb 23 13:06:46.839007 master-0 kubenswrapper[17411]: I0223 13:06:46.838947 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a406f63-eeeb-4da3-a1d0-86b5ab5d802c" volumeName="kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls" seLinuxMountContext=""
Feb 23 13:06:46.839007 master-0 kubenswrapper[17411]: I0223 13:06:46.838978 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.839076 master-0 kubenswrapper[17411]: I0223 13:06:46.839008 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config" seLinuxMountContext=""
Feb 23 13:06:46.839076 master-0 kubenswrapper[17411]: I0223 13:06:46.839040 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16898873-740b-4b85-99cf-d25a28d4ab00" volumeName="kubernetes.io/projected/16898873-740b-4b85-99cf-d25a28d4ab00-kube-api-access-xhmk8" seLinuxMountContext=""
Feb 23 13:06:46.839076 master-0 kubenswrapper[17411]: I0223 13:06:46.839069 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dab1bc-cf56-429b-93ce-911970c41b5c" volumeName="kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.839188 master-0 kubenswrapper[17411]: I0223 13:06:46.839097 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25b5540c-da7d-4b6f-a15f-394451f4674e" volumeName="kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config" seLinuxMountContext=""
Feb 23 13:06:46.839188 master-0 kubenswrapper[17411]: I0223 13:06:46.839127 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29908b4a-0df5-4c46-b886-c968976c25fb" volumeName="kubernetes.io/projected/29908b4a-0df5-4c46-b886-c968976c25fb-kube-api-access-dbzwh" seLinuxMountContext=""
Feb 23 13:06:46.839188 master-0 kubenswrapper[17411]: I0223 13:06:46.839155 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34ad2537-b5fe-463f-8e95-f47cc886aa5e" volumeName="kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-tmp" seLinuxMountContext=""
Feb 23 13:06:46.839331 master-0 kubenswrapper[17411]: I0223 13:06:46.839186 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70ccda5f-ca1a-4fce-b77f-a1132f85635a" volumeName="kubernetes.io/projected/70ccda5f-ca1a-4fce-b77f-a1132f85635a-kube-api-access-mwdtv" seLinuxMountContext=""
Feb 23 13:06:46.839331 master-0 kubenswrapper[17411]: I0223 13:06:46.839216 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a406f63-eeeb-4da3-a1d0-86b5ab5d802c" volumeName="kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca" seLinuxMountContext=""
Feb 23 13:06:46.839408 master-0 kubenswrapper[17411]: I0223 13:06:46.839299 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.839408 master-0 kubenswrapper[17411]: I0223 13:06:46.839385 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" volumeName="kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc" seLinuxMountContext=""
Feb 23 13:06:46.839476 master-0 kubenswrapper[17411]: I0223 13:06:46.839416 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b48d5b87-189b-45b6-ba55-37bd22d59eb6" volumeName="kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-catalog-content" seLinuxMountContext=""
Feb 23 13:06:46.839476 master-0 kubenswrapper[17411]: I0223 13:06:46.839447 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides" seLinuxMountContext=""
Feb 23 13:06:46.839549 master-0 kubenswrapper[17411]: I0223 13:06:46.839477 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" volumeName="kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.839549 master-0 kubenswrapper[17411]: I0223 13:06:46.839508 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39ae352f-b9e3-4bbc-b59b-9fa92c7bc714" volumeName="kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls" seLinuxMountContext=""
Feb 23 13:06:46.839549 master-0 kubenswrapper[17411]: I0223 13:06:46.839538 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b59f2a-7014-448c-9d3b-e38281f07dbc" volumeName="kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy" seLinuxMountContext=""
Feb 23 13:06:46.839663 master-0 kubenswrapper[17411]: I0223 13:06:46.839560 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d91fa6bb-0c88-4930-884a-67e840d58a9f" volumeName="kubernetes.io/projected/d91fa6bb-0c88-4930-884a-67e840d58a9f-kube-api-access-2857n" seLinuxMountContext=""
Feb 23 13:06:46.839663 master-0 kubenswrapper[17411]: I0223 13:06:46.839585 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd03d6e-4c8c-400a-8001-343aaeeca93b" volumeName="kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token" seLinuxMountContext=""
Feb 23 13:06:46.839663 master-0 kubenswrapper[17411]: I0223 13:06:46.839606 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0128982b-01b4-49cb-ab4a-8759b844c86b" volumeName="kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-catalog-content" seLinuxMountContext=""
Feb 23 13:06:46.839663 master-0 kubenswrapper[17411]: I0223 13:06:46.839628 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client" seLinuxMountContext=""
Feb 23 13:06:46.839663 master-0 kubenswrapper[17411]: I0223 13:06:46.839653 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0520301-1a6b-49ca-acca-011692d5b784" volumeName="kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-audit-policies" seLinuxMountContext=""
Feb 23 13:06:46.839836 master-0 kubenswrapper[17411]: I0223 13:06:46.839677 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-encryption-config" seLinuxMountContext=""
Feb 23 13:06:46.839836 master-0 kubenswrapper[17411]: I0223 13:06:46.839699 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab" volumeName="kubernetes.io/projected/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-kube-api-access-8hlwn" seLinuxMountContext=""
Feb 23 13:06:46.839836 master-0 kubenswrapper[17411]: I0223 13:06:46.839721 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39ae352f-b9e3-4bbc-b59b-9fa92c7bc714" volumeName="kubernetes.io/projected/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-kube-api-access-d8cx9" seLinuxMountContext=""
Feb 23 13:06:46.839836 master-0 kubenswrapper[17411]: I0223 13:06:46.839744 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d953c37-1b74-4ce5-89cb-b3f53454fc57" volumeName="kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h" seLinuxMountContext=""
Feb 23 13:06:46.839836 master-0 kubenswrapper[17411]: I0223 13:06:46.839772 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29908b4a-0df5-4c46-b886-c968976c25fb" volumeName="kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-catalog-content" seLinuxMountContext=""
Feb 23 13:06:46.839836 master-0 kubenswrapper[17411]: I0223 13:06:46.839803 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d85c030-4931-42d7-afd6-72b41789aea8" volumeName="kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config" seLinuxMountContext=""
Feb 23 13:06:46.839836 master-0 kubenswrapper[17411]: I0223 13:06:46.839832 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" volumeName="kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access" seLinuxMountContext=""
Feb 23 13:06:46.840083 master-0 kubenswrapper[17411]: I0223 13:06:46.839860 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85958edf-e3da-4704-8f09-cf049101f2e6" volumeName="kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls" seLinuxMountContext=""
Feb 23 13:06:46.840083 master-0 kubenswrapper[17411]: I0223 13:06:46.839882 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8db940c1-82ba-4b6e-8137-059e26ab1ced" volumeName="kubernetes.io/projected/8db940c1-82ba-4b6e-8137-059e26ab1ced-kube-api-access-ts56d" seLinuxMountContext=""
Feb 23 13:06:46.840083 master-0 kubenswrapper[17411]: I0223 13:06:46.839904 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0128982b-01b4-49cb-ab4a-8759b844c86b" volumeName="kubernetes.io/projected/0128982b-01b4-49cb-ab4a-8759b844c86b-kube-api-access-b2s4f" seLinuxMountContext=""
Feb 23 13:06:46.840083 master-0 kubenswrapper[17411]: I0223 13:06:46.839929 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16898873-740b-4b85-99cf-d25a28d4ab00" volumeName="kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls" seLinuxMountContext=""
Feb 23 13:06:46.840083 master-0 kubenswrapper[17411]: I0223 13:06:46.839956 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c51b25-f013-4f5c-acbd-598350468192" volumeName="kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 23 13:06:46.840083 master-0 kubenswrapper[17411]: I0223 13:06:46.839987 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cbcca259-0dbf-48ca-bf90-eec638dcdd10" volumeName="kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert" seLinuxMountContext=""
Feb 23 13:06:46.840083 master-0 kubenswrapper[17411]: I0223 13:06:46.840016 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da5d5997-e45f-4858-a9a9-e880bc222caf" volumeName="kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.840083 master-0 kubenswrapper[17411]: I0223 13:06:46.840060 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib" seLinuxMountContext=""
Feb 23 13:06:46.840438 master-0 kubenswrapper[17411]: I0223 13:06:46.840093 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca" seLinuxMountContext=""
Feb 23 13:06:46.840438 master-0 kubenswrapper[17411]: I0223 13:06:46.840125 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="048f4455-d99a-407b-8674-60efc7aa6ecb" volumeName="kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n" seLinuxMountContext=""
Feb 23 13:06:46.840438 master-0 kubenswrapper[17411]: I0223 13:06:46.840159 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c3f9dc5-d10d-452c-bf5d-c5830a444617" volumeName="kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-catalog-content" seLinuxMountContext=""
Feb 23 13:06:46.840438 master-0 kubenswrapper[17411]: I0223 13:06:46.840190 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c51b25-f013-4f5c-acbd-598350468192" volumeName="kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides" seLinuxMountContext=""
Feb 23 13:06:46.840584 master-0 kubenswrapper[17411]: I0223 13:06:46.840467 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" volumeName="kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config" seLinuxMountContext=""
Feb 23 13:06:46.840584 master-0 kubenswrapper[17411]: I0223 13:06:46.840568 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c33f208a-e158-47e2-83d5-ac792bf3a1d5" volumeName="kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg" seLinuxMountContext=""
Feb 23 13:06:46.840661 master-0 kubenswrapper[17411]: I0223 13:06:46.840599 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cbcca259-0dbf-48ca-bf90-eec638dcdd10" volumeName="kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert" seLinuxMountContext=""
Feb 23 13:06:46.840661 master-0 kubenswrapper[17411]: I0223 13:06:46.840625 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d91fa6bb-0c88-4930-884a-67e840d58a9f" volumeName="kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert" seLinuxMountContext=""
Feb 23 13:06:46.840735 master-0 kubenswrapper[17411]: I0223 13:06:46.840647 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5" seLinuxMountContext=""
Feb 23 13:06:46.840781 master-0 kubenswrapper[17411]: I0223 13:06:46.840742 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d7283ee-8959-44b6-83fb-b152510485eb" volumeName="kubernetes.io/projected/0d7283ee-8959-44b6-83fb-b152510485eb-kube-api-access-hpgsw" seLinuxMountContext=""
Feb 23 13:06:46.840781 master-0 kubenswrapper[17411]: I0223 13:06:46.840769 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9" seLinuxMountContext=""
Feb 23 13:06:46.840863 master-0 kubenswrapper[17411]: I0223 13:06:46.840790 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0520301-1a6b-49ca-acca-011692d5b784" volumeName="kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-etcd-serving-ca" seLinuxMountContext=""
Feb 23 13:06:46.840863 master-0 kubenswrapper[17411]: I0223 13:06:46.840811 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit" seLinuxMountContext=""
Feb 23 13:06:46.840863 master-0 kubenswrapper[17411]: I0223 13:06:46.840832 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0c7587b-eea6-4d98-b39d-3a0feba4035d" volumeName="kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc" seLinuxMountContext=""
Feb 23 13:06:46.840863 master-0 kubenswrapper[17411]: I0223 13:06:46.840852 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34ad2537-b5fe-463f-8e95-f47cc886aa5e" volumeName="kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-tuned" seLinuxMountContext=""
Feb 23 13:06:46.841024 master-0 kubenswrapper[17411]: I0223 13:06:46.840877 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39ae352f-b9e3-4bbc-b59b-9fa92c7bc714" volumeName="kubernetes.io/configmap/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-config-volume" seLinuxMountContext=""
Feb 23 13:06:46.841024 master-0 kubenswrapper[17411]: I0223 13:06:46.840897 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25b5540c-da7d-4b6f-a15f-394451f4674e" volumeName="kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2" seLinuxMountContext=""
Feb 23 13:06:46.841024 master-0 kubenswrapper[17411]: I0223 13:06:46.840923 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c3f9dc5-d10d-452c-bf5d-c5830a444617" volumeName="kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-utilities" seLinuxMountContext=""
Feb 23 13:06:46.841024 master-0 kubenswrapper[17411]: I0223 13:06:46.840946 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3dfb271-a659-45e0-b51d-5e99ec43b555" volumeName="kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca" seLinuxMountContext=""
Feb 23 13:06:46.841024 master-0 kubenswrapper[17411]: I0223 13:06:46.840965 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0d6008c-6e09-4e61-83a5-60456ca90e1e" volumeName="kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-kube-api-access-9l49w" seLinuxMountContext=""
Feb 23 13:06:46.841024 master-0 kubenswrapper[17411]: I0223 13:06:46.840988 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c33f208a-e158-47e2-83d5-ac792bf3a1d5" volumeName="kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841007 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" volumeName="kubernetes.io/projected/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-kube-api-access-xdqd6" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841067 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16898873-740b-4b85-99cf-d25a28d4ab00" volumeName="kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841087 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" volumeName="kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841106 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54411ade-3383-48aa-ba10-62ffb40185b9" volumeName="kubernetes.io/empty-dir/54411ade-3383-48aa-ba10-62ffb40185b9-tmpfs" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841133 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ddfc68-2612-42b6-ad11-6fe44f1cff60" volumeName="kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841156 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3dfb271-a659-45e0-b51d-5e99ec43b555" volumeName="kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841175 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1799b6-85b0-4aed-8835-35cb3d8d1109" volumeName="kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841195 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d32952be-0fe3-431f-aa8f-6a35159fa845" volumeName="kubernetes.io/secret/d32952be-0fe3-431f-aa8f-6a35159fa845-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.841224 master-0 kubenswrapper[17411]: I0223 13:06:46.841216 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd03d6e-4c8c-400a-8001-343aaeeca93b" volumeName="kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841238 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a80d5ac-27ce-4ba9-809e-28c86b80163b" volumeName="kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841285 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dab1bc-cf56-429b-93ce-911970c41b5c" volumeName="kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841315 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841348 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71a07622-3038-4b8c-b6bb-5f28a4115012" volumeName="kubernetes.io/configmap/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-cabundle" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841373 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" volumeName="kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841393 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-config" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841472 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841495 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25b5540c-da7d-4b6f-a15f-394451f4674e" volumeName="kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841518 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70ccda5f-ca1a-4fce-b77f-a1132f85635a" volumeName="kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841542 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfbb4d6d-7047-48cb-be03-97a57fc688e3" volumeName="kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-kube-api-access-rqsvs" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841565 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0d6008c-6e09-4e61-83a5-60456ca90e1e" volumeName="kubernetes.io/empty-dir/c0d6008c-6e09-4e61-83a5-60456ca90e1e-cache" seLinuxMountContext=""
Feb 23 13:06:46.841572 master-0 kubenswrapper[17411]: I0223 13:06:46.841585 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0d6008c-6e09-4e61-83a5-60456ca90e1e" volumeName="kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-ca-certs" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841605 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d7283ee-8959-44b6-83fb-b152510485eb" volumeName="kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841625 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae5c9120-c38d-46c0-af43-9275563b1a67" volumeName="kubernetes.io/projected/ae5c9120-c38d-46c0-af43-9275563b1a67-kube-api-access-8f6sq" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841646 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-serving-ca" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841665 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" volumeName="kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841684 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8db940c1-82ba-4b6e-8137-059e26ab1ced" volumeName="kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841704 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b48d5b87-189b-45b6-ba55-37bd22d59eb6" volumeName="kubernetes.io/projected/b48d5b87-189b-45b6-ba55-37bd22d59eb6-kube-api-access-nj957" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841725 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" volumeName="kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841744 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0520301-1a6b-49ca-acca-011692d5b784" volumeName="kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-encryption-config" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841765 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee436961-c305-4c84-b4f9-175e1d8004fb" volumeName="kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841808 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" volumeName="kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841832 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" volumeName="kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841854 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfbb4d6d-7047-48cb-be03-97a57fc688e3" volumeName="kubernetes.io/empty-dir/bfbb4d6d-7047-48cb-be03-97a57fc688e3-cache" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841877 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0520301-1a6b-49ca-acca-011692d5b784" volumeName="kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841899 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841918 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d32952be-0fe3-431f-aa8f-6a35159fa845" volumeName="kubernetes.io/configmap/d32952be-0fe3-431f-aa8f-6a35159fa845-cco-trusted-ca" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841940 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da5d5997-e45f-4858-a9a9-e880bc222caf" volumeName="kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841960 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" volumeName="kubernetes.io/configmap/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-service-ca" seLinuxMountContext=""
Feb 23 13:06:46.841983 master-0 kubenswrapper[17411]: I0223 13:06:46.841983 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc22782-a369-48aa-a0e8-c1c63ffa3053" volumeName="kubernetes.io/secret/4bc22782-a369-48aa-a0e8-c1c63ffa3053-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842005 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a406f63-eeeb-4da3-a1d0-86b5ab5d802c" volumeName="kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842028 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4c51b25-f013-4f5c-acbd-598350468192" volumeName="kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842047 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" volumeName="kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842066 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7fbab55-8405-44f4-ae2a-412c115ce411" volumeName="kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842086 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d7283ee-8959-44b6-83fb-b152510485eb" volumeName="kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842107 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70ccda5f-ca1a-4fce-b77f-a1132f85635a" volumeName="kubernetes.io/empty-dir/70ccda5f-ca1a-4fce-b77f-a1132f85635a-snapshots" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842128 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54411ade-3383-48aa-ba10-62ffb40185b9" volumeName="kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842152 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a406f63-eeeb-4da3-a1d0-86b5ab5d802c" volumeName="kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token" seLinuxMountContext=""
Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842177 17411 reconstruct.go:130] "Volume is
marked as uncertain and added into the actual state" pod="" podName="8db940c1-82ba-4b6e-8137-059e26ab1ced" volumeName="kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842196 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b7585f9f-12e5-451b-beeb-db43ae778f25" volumeName="kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842216 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b59f2a-7014-448c-9d3b-e38281f07dbc" volumeName="kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842235 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c33f208a-e158-47e2-83d5-ac792bf3a1d5" volumeName="kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842281 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08577c3c-73d8-47f4-ba30-aec11af51d40" volumeName="kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842309 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d85c030-4931-42d7-afd6-72b41789aea8" volumeName="kubernetes.io/projected/3d85c030-4931-42d7-afd6-72b41789aea8-kube-api-access-zhl9t" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842336 17411 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="ee436961-c305-4c84-b4f9-175e1d8004fb" volumeName="kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842364 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" volumeName="kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842383 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" volumeName="kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842403 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2b80534-3c9d-4ddb-9215-d50d63294c7c" volumeName="kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842425 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842446 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" volumeName="kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842466 17411 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" volumeName="kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842484 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dab1bc-cf56-429b-93ce-911970c41b5c" volumeName="kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842504 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d82f223-e28b-4917-8513-3ca5c6e9bff7" volumeName="kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842526 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54411ade-3383-48aa-ba10-62ffb40185b9" volumeName="kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842544 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfbb4d6d-7047-48cb-be03-97a57fc688e3" volumeName="kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-ca-certs" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842565 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfbb4d6d-7047-48cb-be03-97a57fc688e3" volumeName="kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842584 17411 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="dcd03d6e-4c8c-400a-8001-343aaeeca93b" volumeName="kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842605 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="031016de-897e-42bc-9de4-843122f64a75" volumeName="kubernetes.io/projected/031016de-897e-42bc-9de4-843122f64a75-kube-api-access-sbml7" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842629 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a80d5ac-27ce-4ba9-809e-28c86b80163b" volumeName="kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842647 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" volumeName="kubernetes.io/projected/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-kube-api-access" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842666 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8db940c1-82ba-4b6e-8137-059e26ab1ced" volumeName="kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config" seLinuxMountContext="" Feb 23 13:06:46.842659 master-0 kubenswrapper[17411]: I0223 13:06:46.842687 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c3f9dc5-d10d-452c-bf5d-c5830a444617" volumeName="kubernetes.io/projected/9c3f9dc5-d10d-452c-bf5d-c5830a444617-kube-api-access-65tqd" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842709 17411 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" volumeName="kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842729 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" volumeName="kubernetes.io/projected/4e6bc033-cd90-4704-b03a-8e9c6c0d3904-kube-api-access-2tgmq" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842749 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d82f223-e28b-4917-8513-3ca5c6e9bff7" volumeName="kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842768 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" volumeName="kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842790 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-image-import-ca" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842808 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee436961-c305-4c84-b4f9-175e1d8004fb" volumeName="kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842829 17411 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="0128982b-01b4-49cb-ab4a-8759b844c86b" volumeName="kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-utilities" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842851 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a80d5ac-27ce-4ba9-809e-28c86b80163b" volumeName="kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842906 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c33f208a-e158-47e2-83d5-ac792bf3a1d5" volumeName="kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842936 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d32952be-0fe3-431f-aa8f-6a35159fa845" volumeName="kubernetes.io/projected/d32952be-0fe3-431f-aa8f-6a35159fa845-kube-api-access-5zs2l" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842960 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" volumeName="kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.842989 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71a07622-3038-4b8c-b6bb-5f28a4115012" volumeName="kubernetes.io/secret/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-key" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843020 17411 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" volumeName="kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843047 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b48d5b87-189b-45b6-ba55-37bd22d59eb6" volumeName="kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-utilities" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843072 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843098 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ddfc68-2612-42b6-ad11-6fe44f1cff60" volumeName="kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843122 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ddfc68-2612-42b6-ad11-6fe44f1cff60" volumeName="kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843140 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd03d6e-4c8c-400a-8001-343aaeeca93b" volumeName="kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843164 17411 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" volumeName="kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843191 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34ad2537-b5fe-463f-8e95-f47cc886aa5e" volumeName="kubernetes.io/projected/34ad2537-b5fe-463f-8e95-f47cc886aa5e-kube-api-access-4r4jv" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843215 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b59f2a-7014-448c-9d3b-e38281f07dbc" volumeName="kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843275 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2b80534-3c9d-4ddb-9215-d50d63294c7c" volumeName="kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843305 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cbcca259-0dbf-48ca-bf90-eec638dcdd10" volumeName="kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843335 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d85c030-4931-42d7-afd6-72b41789aea8" volumeName="kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843362 17411 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-client" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843388 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d82f223-e28b-4917-8513-3ca5c6e9bff7" volumeName="kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843418 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430cb782-18d5-4429-99ef-29d3dca0d803" volumeName="kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843441 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc22782-a369-48aa-a0e8-c1c63ffa3053" volumeName="kubernetes.io/projected/4bc22782-a369-48aa-a0e8-c1c63ffa3053-kube-api-access-265wg" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843462 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0520301-1a6b-49ca-acca-011692d5b784" volumeName="kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-serving-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843480 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c159d5f4-5c95-4600-80ec-a17a419cfd7a" volumeName="kubernetes.io/projected/c159d5f4-5c95-4600-80ec-a17a419cfd7a-kube-api-access-rbl2g" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843498 17411 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="c2b80534-3c9d-4ddb-9215-d50d63294c7c" volumeName="kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843519 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d7283ee-8959-44b6-83fb-b152510485eb" volumeName="kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843539 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d953c37-1b74-4ce5-89cb-b3f53454fc57" volumeName="kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843557 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430cb782-18d5-4429-99ef-29d3dca0d803" volumeName="kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843578 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44b07d33-6e84-434e-9a14-431846620968" volumeName="kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843596 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1799b6-85b0-4aed-8835-35cb3d8d1109" volumeName="kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843617 17411 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843636 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" volumeName="kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843656 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" volumeName="kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843677 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16898873-740b-4b85-99cf-d25a28d4ab00" volumeName="kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843698 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29908b4a-0df5-4c46-b886-c968976c25fb" volumeName="kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-utilities" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843717 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" volumeName="kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843736 17411 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70ccda5f-ca1a-4fce-b77f-a1132f85635a" volumeName="kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843755 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71a07622-3038-4b8c-b6bb-5f28a4115012" volumeName="kubernetes.io/projected/71a07622-3038-4b8c-b6bb-5f28a4115012-kube-api-access-6r8s7" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843776 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08577c3c-73d8-47f4-ba30-aec11af51d40" volumeName="kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843797 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" volumeName="kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843819 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ddfc68-2612-42b6-ad11-6fe44f1cff60" volumeName="kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843839 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1799b6-85b0-4aed-8835-35cb3d8d1109" volumeName="kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843859 17411 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" volumeName="kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843878 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d91fa6bb-0c88-4930-884a-67e840d58a9f" volumeName="kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-profile-collector-cert" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843900 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="048f4455-d99a-407b-8674-60efc7aa6ecb" volumeName="kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843921 17411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab" volumeName="kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls" seLinuxMountContext="" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843941 17411 reconstruct.go:97] "Volume reconstruction finished" Feb 23 13:06:46.844132 master-0 kubenswrapper[17411]: I0223 13:06:46.843957 17411 reconciler.go:26] "Reconciler: start to sync state" Feb 23 13:06:46.848863 master-0 kubenswrapper[17411]: I0223 13:06:46.848790 17411 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 23 13:06:46.863031 master-0 kubenswrapper[17411]: I0223 13:06:46.862780 17411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 23 13:06:46.868409 master-0 kubenswrapper[17411]: I0223 13:06:46.867348 17411 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Feb 23 13:06:46.868409 master-0 kubenswrapper[17411]: I0223 13:06:46.867425 17411 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 23 13:06:46.868409 master-0 kubenswrapper[17411]: I0223 13:06:46.867469 17411 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 23 13:06:46.868409 master-0 kubenswrapper[17411]: E0223 13:06:46.868304 17411 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 23 13:06:46.870705 master-0 kubenswrapper[17411]: I0223 13:06:46.870633 17411 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 23 13:06:46.882090 master-0 kubenswrapper[17411]: I0223 13:06:46.882037 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_1860bead-61b8-4678-b583-c13c79575ef4/installer/0.log"
Feb 23 13:06:46.882294 master-0 kubenswrapper[17411]: I0223 13:06:46.882099 17411 generic.go:334] "Generic (PLEG): container finished" podID="1860bead-61b8-4678-b583-c13c79575ef4" containerID="923861d3e14f9f1ed180c6fc4f602226ba1fa39cb2d6ada3746794e2192c190f" exitCode=1
Feb 23 13:06:46.883620 master-0 kubenswrapper[17411]: I0223 13:06:46.883582 17411 generic.go:334] "Generic (PLEG): container finished" podID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerID="43e1e42f0f51b9501eada9df5600a37753dcd2c27cc6181d29c70a1a9b841cdd" exitCode=0
Feb 23 13:06:46.886645 master-0 kubenswrapper[17411]: I0223 13:06:46.886603 17411 generic.go:334] "Generic (PLEG): container finished" podID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" containerID="3ae29be9fa54806971b4e3b9c2201c003f7b8a22a37869a91acf05e5506d41f9" exitCode=0
Feb 23 13:06:46.899588 master-0 kubenswrapper[17411]: I0223 13:06:46.899451 17411 generic.go:334] "Generic (PLEG): container finished" podID="c159d5f4-5c95-4600-80ec-a17a419cfd7a" containerID="6a3071ee7afe1d84c717a0f5829e74858f0e8791b2e3d45c88b0d153dec1ab43" exitCode=0
Feb 23 13:06:46.910716 master-0 kubenswrapper[17411]: I0223 13:06:46.910652 17411 generic.go:334] "Generic (PLEG): container finished" podID="0a80d5ac-27ce-4ba9-809e-28c86b80163b" containerID="1c78631b268af69806ac6e44c535cf690809e56173b2809b3ab9b30ce469dd12" exitCode=0
Feb 23 13:06:46.915487 master-0 kubenswrapper[17411]: I0223 13:06:46.915418 17411 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036" exitCode=0
Feb 23 13:06:46.915487 master-0 kubenswrapper[17411]: I0223 13:06:46.915473 17411 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9" exitCode=0
Feb 23 13:06:46.915487 master-0 kubenswrapper[17411]: I0223 13:06:46.915485 17411 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6" exitCode=0
Feb 23 13:06:46.917816 master-0 kubenswrapper[17411]: I0223 13:06:46.917746 17411 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="177a00edcfd919e7d221798cd7875143318357f73a98d1f96f1e3d8cf020354d" exitCode=1
Feb 23 13:06:46.919588 master-0 kubenswrapper[17411]: I0223 13:06:46.919525 17411 generic.go:334] "Generic (PLEG): container finished" podID="c0520301-1a6b-49ca-acca-011692d5b784" containerID="f52728fcdc20113e5e153a7f773c95297fdf5d76daa1b4959be887f3eec9a44d" exitCode=0
Feb 23 13:06:46.922986 master-0 kubenswrapper[17411]: I0223 13:06:46.922937 17411 generic.go:334] "Generic (PLEG): container finished" podID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" containerID="f95ba38760f7dc259e69f00ebd4eabf8bd09b35de53d8f84cbae1abd114eb259" exitCode=0
Feb 23 13:06:46.926458 master-0 kubenswrapper[17411]: I0223 13:06:46.926412 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/1.log"
Feb 23 13:06:46.926548 master-0 kubenswrapper[17411]: I0223 13:06:46.926463 17411 generic.go:334] "Generic (PLEG): container finished" podID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" containerID="548c2b6ddec877e25587f0b887e8188520ed011da1cb3c86a39995da4b475367" exitCode=255
Feb 23 13:06:46.931208 master-0 kubenswrapper[17411]: I0223 13:06:46.931141 17411 generic.go:334] "Generic (PLEG): container finished" podID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" containerID="fc76a6ebf82c376de367ae9069a978505805d785a26a3e42e6dad2867b699aeb" exitCode=0
Feb 23 13:06:46.934612 master-0 kubenswrapper[17411]: I0223 13:06:46.934548 17411 generic.go:334] "Generic (PLEG): container finished" podID="29908b4a-0df5-4c46-b886-c968976c25fb" containerID="f1f8754c5384bd933de1355ed0d4210b1fe7bc06bbbe4e8dc3bb20c9c6ae8499" exitCode=0
Feb 23 13:06:46.934612 master-0 kubenswrapper[17411]: I0223 13:06:46.934597 17411 generic.go:334] "Generic (PLEG): container finished" podID="29908b4a-0df5-4c46-b886-c968976c25fb" containerID="95e946dd400ab3361a407271ad87765a76061201b898907bfd81d61a000c3f70" exitCode=0
Feb 23 13:06:46.937111 master-0 kubenswrapper[17411]: I0223 13:06:46.937070 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-4wvxd_3d82f223-e28b-4917-8513-3ca5c6e9bff7/approver/0.log"
Feb 23 13:06:46.937558 master-0 kubenswrapper[17411]: I0223 13:06:46.937514 17411 generic.go:334] "Generic (PLEG): container finished" podID="3d82f223-e28b-4917-8513-3ca5c6e9bff7" containerID="c1dd3ed6ae85552fa55579d176bf04ab4acb74f8741f6985ce9c654154b5172e" exitCode=1
Feb 23 13:06:46.947182 master-0 kubenswrapper[17411]: I0223 13:06:46.947145 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/1.log"
Feb 23 13:06:46.947603 master-0 kubenswrapper[17411]: I0223 13:06:46.947530 17411 generic.go:334] "Generic (PLEG): container finished" podID="16898873-740b-4b85-99cf-d25a28d4ab00" containerID="65c1fff907a886de0c20ba50f90af4df31705ea1e7b38b4684f430c20bbd2c46" exitCode=1
Feb 23 13:06:46.951721 master-0 kubenswrapper[17411]: I0223 13:06:46.951670 17411 generic.go:334] "Generic (PLEG): container finished" podID="b4c51b25-f013-4f5c-acbd-598350468192" containerID="c7825c24449084470222f141223b142962350c867bc7733a06b6b459b6dc7405" exitCode=0
Feb 23 13:06:46.953169 master-0 kubenswrapper[17411]: I0223 13:06:46.953129 17411 generic.go:334] "Generic (PLEG): container finished" podID="c2e50127-3c2e-4514-ace5-2cf6f9223abf" containerID="87320ceaa2976029b0853261379f23dc5fc274ad76d399f47415010358a9fd41" exitCode=0
Feb 23 13:06:46.956021 master-0 kubenswrapper[17411]: I0223 13:06:46.955956 17411 generic.go:334] "Generic (PLEG): container finished" podID="24dab1bc-cf56-429b-93ce-911970c41b5c" containerID="cde99f61030d5fde6382d6afa69998ae8c2f31acfb6e6f4017c5ade4d9e4754a" exitCode=0
Feb 23 13:06:46.956021 master-0 kubenswrapper[17411]: I0223 13:06:46.955987 17411 generic.go:334] "Generic (PLEG): container finished" podID="24dab1bc-cf56-429b-93ce-911970c41b5c" containerID="07876e9794bd8ca67f2728050ff6edcd802e3171d1b608edbf504131457eacb4" exitCode=0
Feb 23 13:06:46.956021 master-0 kubenswrapper[17411]: I0223 13:06:46.956023 17411 generic.go:334] "Generic (PLEG): container finished" podID="24dab1bc-cf56-429b-93ce-911970c41b5c" containerID="e0b0c5dcd2cd007a994c23cec23f8805edde2250fc578b36745a7a529644718b" exitCode=0
Feb 23 13:06:46.957818 master-0 kubenswrapper[17411]: I0223 13:06:46.957783 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/1.log"
Feb 23 13:06:46.958141 master-0 kubenswrapper[17411]: I0223 13:06:46.958104 17411 generic.go:334] "Generic (PLEG): container finished" podID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerID="c62b96fd922cdecfa004e96b0409b64671fda2f755f956fa786e2d7faadf3475" exitCode=255
Feb 23 13:06:46.958141 master-0 kubenswrapper[17411]: I0223 13:06:46.958131 17411 generic.go:334] "Generic (PLEG): container finished" podID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerID="a097939ffa402c84b79b8f7d24af36dfd241d3d508ee58d590cce7445e784fed" exitCode=0
Feb 23 13:06:46.960798 master-0 kubenswrapper[17411]: I0223 13:06:46.960756 17411 generic.go:334] "Generic (PLEG): container finished" podID="9c3f9dc5-d10d-452c-bf5d-c5830a444617" containerID="d0de1e6343e6391d3758c50779d73db6f7290912532fe3316a0336e90448c6db" exitCode=0
Feb 23 13:06:46.960798 master-0 kubenswrapper[17411]: I0223 13:06:46.960783 17411 generic.go:334] "Generic (PLEG): container finished" podID="9c3f9dc5-d10d-452c-bf5d-c5830a444617" containerID="575e8eb2d638c0aaa08f496c1356ae98d7c6f7469dbf105d6341ad7a0b64e752" exitCode=0
Feb 23 13:06:46.963548 master-0 kubenswrapper[17411]: I0223 13:06:46.963519 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 23 13:06:46.963999 master-0 kubenswrapper[17411]: I0223 13:06:46.963955 17411 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd" exitCode=1
Feb 23 13:06:46.963999 master-0 kubenswrapper[17411]: I0223 13:06:46.963994 17411 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="9b2e0681668d9a8b51eaa2c8d5041d6128575d63543d355f03fa756ab6c575b2" exitCode=0
Feb 23 13:06:46.966067 master-0 kubenswrapper[17411]: I0223 13:06:46.966029 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2d8a9026-ee0a-44c4-9c90-cd863f5461dd/installer/0.log"
Feb 23 13:06:46.966141 master-0 kubenswrapper[17411]: I0223 13:06:46.966068 17411 generic.go:334] "Generic (PLEG): container finished" podID="2d8a9026-ee0a-44c4-9c90-cd863f5461dd" containerID="76debd76d1c83d2501b62235b0e22ba16bdbcca50bf40d8506d768b4e775ec89" exitCode=1
Feb 23 13:06:46.967436 master-0 kubenswrapper[17411]: I0223 13:06:46.967404 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-rvz4w_4bc22782-a369-48aa-a0e8-c1c63ffa3053/control-plane-machine-set-operator/0.log"
Feb 23 13:06:46.967505 master-0 kubenswrapper[17411]: I0223 13:06:46.967439 17411 generic.go:334] "Generic (PLEG): container finished" podID="4bc22782-a369-48aa-a0e8-c1c63ffa3053" containerID="0a361025f0f0b4dd3a2d9d3bc39a5bc567c08f5ded2a78f736405795214ce703" exitCode=1
Feb 23 13:06:46.968421 master-0 kubenswrapper[17411]: E0223 13:06:46.968385 17411 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 23 13:06:46.969116 master-0 kubenswrapper[17411]: I0223 13:06:46.969080 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-j5hpl_c0d6008c-6e09-4e61-83a5-60456ca90e1e/manager/0.log"
Feb 23 13:06:46.969178 master-0 kubenswrapper[17411]: I0223 13:06:46.969117 17411 generic.go:334] "Generic (PLEG): container finished" podID="c0d6008c-6e09-4e61-83a5-60456ca90e1e" containerID="49260b269ae6d09884492d00790a3a52d5e0644389747da3e51aa260e0b91b26" exitCode=1
Feb 23 13:06:46.972015 master-0 kubenswrapper[17411]: I0223 13:06:46.971953 17411 generic.go:334] "Generic (PLEG): container finished" podID="ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2" containerID="860a9e244b04d91c3a33beb656c339e8751b53849a1636cd6eb8994e31e07960" exitCode=0
Feb 23 13:06:46.973990 master-0 kubenswrapper[17411]: I0223 13:06:46.973950 17411 generic.go:334] "Generic (PLEG): container finished" podID="b48d5b87-189b-45b6-ba55-37bd22d59eb6" containerID="0cd30e8676779569aa21305583cf916e9593358a307866f2fe5ad8cf68542eb9" exitCode=0
Feb 23 13:06:46.973990 master-0 kubenswrapper[17411]: I0223 13:06:46.973976 17411 generic.go:334] "Generic (PLEG): container finished" podID="b48d5b87-189b-45b6-ba55-37bd22d59eb6" containerID="d2baf7def32d6ff8e0d60946c5533f6a35fc42b4bd00e227486661e9d86637b2" exitCode=0
Feb 23 13:06:46.975359 master-0 kubenswrapper[17411]: I0223 13:06:46.975325 17411 generic.go:334] "Generic (PLEG): container finished" podID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerID="cb2d2d4fb80101957c4b13b6c2b179a921353fd0e5984e898b9fcd6ec41fc1bb" exitCode=0
Feb 23 13:06:46.977966 master-0 kubenswrapper[17411]: I0223 13:06:46.977895 17411 generic.go:334] "Generic (PLEG): container finished" podID="1d953c37-1b74-4ce5-89cb-b3f53454fc57" containerID="611405a04dc23476e0102b383f4f0d51fbb39430cdde420d7a3d20790ecb0a3a" exitCode=0
Feb 23 13:06:46.980443 master-0 kubenswrapper[17411]: I0223 13:06:46.980411 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-gswst_dcd03d6e-4c8c-400a-8001-343aaeeca93b/ingress-operator/0.log"
Feb 23 13:06:46.980498 master-0 kubenswrapper[17411]: I0223 13:06:46.980449 17411 generic.go:334] "Generic (PLEG): container finished" podID="dcd03d6e-4c8c-400a-8001-343aaeeca93b" containerID="d573c3e0e8ebb6202d8c5ebe9e0d85b859c5927b89cbdd3a205e10371f242b28" exitCode=1
Feb 23 13:06:46.984310 master-0 kubenswrapper[17411]: I0223 13:06:46.984274 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-t9gx8_99399ebb-c95f-4663-b3b6-f5dfabf47fcf/openshift-controller-manager-operator/0.log"
Feb 23 13:06:46.984394 master-0 kubenswrapper[17411]: I0223 13:06:46.984308 17411 generic.go:334] "Generic (PLEG): container finished" podID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" containerID="debed11d31f7b75fad2471852851fc7fa04c00d3d8576daf98e7b22222001920" exitCode=1
Feb 23 13:06:46.986674 master-0 kubenswrapper[17411]: I0223 13:06:46.986628 17411 generic.go:334] "Generic (PLEG): container finished" podID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" containerID="723e0d3ac0bfebcf9019d23491b2a123aaa94b496865e7bf006a731caaf79830" exitCode=0
Feb 23 13:06:46.988984 master-0 kubenswrapper[17411]: I0223 13:06:46.988941 17411 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="2a70c0c29b6d30120d04b79d2da1e4abf09061bb5671dd422b5ce63244e7fbf8" exitCode=0
Feb 23 13:06:46.988984 master-0 kubenswrapper[17411]: I0223 13:06:46.988967 17411 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="d7c78d97c5c5cb888cf7f64ec84b51fa9486a9d5d5840d99c65981486e968902" exitCode=0
Feb 23 13:06:46.988984 master-0 kubenswrapper[17411]: I0223 13:06:46.988979 17411 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="313dcd35e66618a3a3a009757d79bf6b3b9afb4f0c77e372c518f0c8a219ea2f" exitCode=0
Feb 23 13:06:46.989149 master-0 kubenswrapper[17411]: I0223 13:06:46.988991 17411 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="aa169cb62afad633a7432fb996d7a5e8546ab3591767d1cbb4ee55535e914204" exitCode=0
Feb 23 13:06:46.989149 master-0 kubenswrapper[17411]: I0223 13:06:46.989004 17411 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="d363f0290cd5f73712e4ac4fe33436a5021a7548f84e19592e8c13df6abe2ebb" exitCode=0
Feb 23 13:06:46.989149 master-0 kubenswrapper[17411]: I0223 13:06:46.989014 17411 generic.go:334] "Generic (PLEG): container finished" podID="65ddfc68-2612-42b6-ad11-6fe44f1cff60" containerID="a490aeb54094c79e65d9b093b1d71d57a70012d976fefb24957c763212ff701d" exitCode=0
Feb 23 13:06:46.991856 master-0 kubenswrapper[17411]: I0223 13:06:46.991797 17411 generic.go:334] "Generic (PLEG): container finished" podID="85958edf-e3da-4704-8f09-cf049101f2e6" containerID="bc8ade9334364114738902823dc600f3740baca0ab4d65155426a77698e2093f" exitCode=0
Feb 23 13:06:46.993034 master-0 kubenswrapper[17411]: I0223 13:06:46.992997 17411 generic.go:334] "Generic (PLEG): container finished" podID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" containerID="8ede5ecb3a272a47d1a15ebb39f7a70622cc8eaa31a144f09ad6e73baceca956" exitCode=0
Feb 23 13:06:46.997141 master-0 kubenswrapper[17411]: I0223 13:06:46.997100 17411 generic.go:334] "Generic (PLEG): container finished" podID="ed33f74deb6fdef2cfa169d8db13e51c" containerID="9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78" exitCode=0
Feb 23 13:06:46.999270 master-0 kubenswrapper[17411]: I0223 13:06:46.999219 17411 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="6f08e1116d82edc6d1a5a54978dd03f762876e6846750a14b497bad3e1b62afe" exitCode=0
Feb 23 13:06:46.999270 master-0 kubenswrapper[17411]: I0223 13:06:46.999256 17411 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="7e9526f21d0004f4be338f194dd1d8ef03df5393e98a9f29994fc1a1aea54d33" exitCode=0
Feb 23 13:06:46.999270 master-0 kubenswrapper[17411]: I0223 13:06:46.999269 17411 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="128581ddbe7657ebd83ea9ba25a542fc8f1d7245b7d7a38fdcce26195377d53b" exitCode=0
Feb 23 13:06:47.004498 master-0 kubenswrapper[17411]: I0223 13:06:47.003688 17411 generic.go:334] "Generic (PLEG): container finished" podID="ce5fa293-4526-4dd9-a0e4-a1db7d667092" containerID="19aea6b0c64c2242c1162a5644f9c7d995fa9caa7710602094da7d8d77b66e03" exitCode=0
Feb 23 13:06:47.005751 master-0 kubenswrapper[17411]: I0223 13:06:47.005709 17411 generic.go:334] "Generic (PLEG): container finished" podID="05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" containerID="22927b186dd20d4435230884e99b7e79937083b7c678e2250219b649223f7070" exitCode=0
Feb 23 13:06:47.008146 master-0 kubenswrapper[17411]: I0223 13:06:47.008112 17411 generic.go:334] "Generic (PLEG): container finished" podID="25b5540c-da7d-4b6f-a15f-394451f4674e" containerID="c7bf15e370636a4712d661fd1bd5bae0ffc88b863a6740ad094330d58359da39" exitCode=0
Feb 23 13:06:47.019001 master-0 kubenswrapper[17411]: I0223 13:06:47.018934 17411 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="dfd86a94ccff1eeb13e1ddaabeeeb38c3d4bc54e7d5689b425d76ab48acf7562" exitCode=0
Feb 23 13:06:47.019001 master-0 kubenswrapper[17411]: I0223 13:06:47.018985 17411 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="321eaf326ad8a489a13ada6c53cf34c2c99e6344cfe3f0727f5eef006f9c7e8e" exitCode=0
Feb 23 13:06:47.023482 master-0 kubenswrapper[17411]: I0223 13:06:47.023429 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/1.log"
Feb 23 13:06:47.023555 master-0 kubenswrapper[17411]: I0223 13:06:47.023494 17411 generic.go:334] "Generic (PLEG): container finished" podID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" containerID="b344f0832b62956e749c09fccb690fc11d54040c9d919827bfbb6ce448268045" exitCode=1
Feb 23 13:06:47.035276 master-0 kubenswrapper[17411]: I0223 13:06:47.035207 17411 generic.go:334] "Generic (PLEG): container finished" podID="0128982b-01b4-49cb-ab4a-8759b844c86b" containerID="13f118397154c0722bc4d67c0e8029845516c7227b9d9347ffbb69f6316914e4" exitCode=0
Feb 23 13:06:47.035276 master-0 kubenswrapper[17411]: I0223 13:06:47.035264 17411 generic.go:334] "Generic (PLEG): container finished" podID="0128982b-01b4-49cb-ab4a-8759b844c86b" containerID="724a8df1a9b3d2adc3e5862fae8386b6be43fcc540a79a07de74b8360f4c034d" exitCode=0
Feb 23 13:06:47.038118 master-0 kubenswrapper[17411]: I0223 13:06:47.037840 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_04a14e09-67c1-45e9-af34-bccb2fe3757e/installer/0.log"
Feb 23 13:06:47.038118 master-0 kubenswrapper[17411]: I0223 13:06:47.037900 17411 generic.go:334] "Generic (PLEG): container finished" podID="04a14e09-67c1-45e9-af34-bccb2fe3757e" containerID="88e0e24f4f045d3a42d1ee4cfb99a951aeace5cf2e7bece4bd5f41827f8965f5" exitCode=1
Feb 23 13:06:47.039890 master-0 kubenswrapper[17411]: I0223 13:06:47.039856 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-bckd6_bfbb4d6d-7047-48cb-be03-97a57fc688e3/manager/0.log"
Feb 23 13:06:47.040227 master-0 kubenswrapper[17411]: I0223 13:06:47.040189 17411 generic.go:334] "Generic (PLEG): container finished" podID="bfbb4d6d-7047-48cb-be03-97a57fc688e3" containerID="b8216c6629595ae79e53d792a20a769b60a06e1e5c09e5dc292d86cb2730407e" exitCode=1
Feb 23 13:06:47.168664 master-0 kubenswrapper[17411]: E0223 13:06:47.168551 17411 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 23 13:06:47.261691 master-0 kubenswrapper[17411]: I0223 13:06:47.261343 17411 manager.go:324] Recovery completed
Feb 23 13:06:47.347012 master-0 kubenswrapper[17411]: I0223 13:06:47.346901 17411 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 23 13:06:47.347012 master-0 kubenswrapper[17411]: I0223 13:06:47.346931 17411 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 23 13:06:47.347012 master-0 kubenswrapper[17411]: I0223 13:06:47.346954 17411 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 13:06:47.347432 master-0 kubenswrapper[17411]: I0223 13:06:47.347138 17411 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 23 13:06:47.347432 master-0 kubenswrapper[17411]: I0223 13:06:47.347152 17411 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 23 13:06:47.347432 master-0 kubenswrapper[17411]: I0223 13:06:47.347178 17411 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Feb 23 13:06:47.347432 master-0 kubenswrapper[17411]: I0223 13:06:47.347190 17411 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Feb 23 13:06:47.347432 master-0 kubenswrapper[17411]: I0223 13:06:47.347200 17411 policy_none.go:49] "None policy: Start"
Feb 23 13:06:47.354832 master-0 kubenswrapper[17411]: I0223 13:06:47.354772 17411 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 23 13:06:47.354832 master-0 kubenswrapper[17411]: I0223 13:06:47.354820 17411 state_mem.go:35] "Initializing new in-memory state store"
Feb 23 13:06:47.355109 master-0 kubenswrapper[17411]: I0223 13:06:47.355074 17411 state_mem.go:75] "Updated machine memory state"
Feb 23 13:06:47.355109 master-0 kubenswrapper[17411]: I0223 13:06:47.355091 17411 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Feb 23 13:06:47.372163 master-0 kubenswrapper[17411]: I0223 13:06:47.372007 17411 manager.go:334] "Starting Device Plugin manager"
Feb 23 13:06:47.372163 master-0 kubenswrapper[17411]: I0223 13:06:47.372069 17411 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 23 13:06:47.372163 master-0 kubenswrapper[17411]: I0223 13:06:47.372088 17411 server.go:79] "Starting device plugin registration server"
Feb 23 13:06:47.372829 master-0 kubenswrapper[17411]: I0223 13:06:47.372770 17411 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 23 13:06:47.372957 master-0 kubenswrapper[17411]: I0223 13:06:47.372800 17411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 23 13:06:47.372957 master-0 kubenswrapper[17411]: I0223 13:06:47.372950 17411 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 23 13:06:47.373215 master-0 kubenswrapper[17411]: I0223 13:06:47.373095 17411 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 23 13:06:47.373215 master-0 kubenswrapper[17411]: I0223 13:06:47.373112 17411 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 23 13:06:47.473008 master-0 kubenswrapper[17411]: I0223 13:06:47.472951 17411 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 13:06:47.475780 master-0 kubenswrapper[17411]: I0223 13:06:47.475740 17411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 23 13:06:47.475832 master-0 kubenswrapper[17411]: I0223 13:06:47.475794 17411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 23 13:06:47.475832 master-0 kubenswrapper[17411]: I0223 13:06:47.475813 17411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 23 13:06:47.475952 master-0 kubenswrapper[17411]: I0223 13:06:47.475925 17411 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 23 13:06:47.481513 master-0 kubenswrapper[17411]: E0223 13:06:47.481455 17411 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 23 13:06:47.569117 master-0 kubenswrapper[17411]: I0223 13:06:47.568963 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:06:47.570751 master-0 kubenswrapper[17411]: I0223 13:06:47.570607 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d55c80b452ec57080fce8905969e2a9fba190533481c5ba5b0159b45e85104dd"
Feb 23 13:06:47.570751 master-0 kubenswrapper[17411]: I0223 13:06:47.570658 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a95e454bc009280f30c693dc88db93f3cc1480aff05204c4d58205b2ffec4b"
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.570826 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.570920 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.570938 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.570950 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.570962 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05c8e14cb165534672d5ddc06061f8f2","Type":"ContainerStarted","Data":"3dcb59345b5bc0117b6a00f1149c42a48da8235be304949c4a08edf500dfc736"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.570993 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.571011 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.571027 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.571042 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.571056 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.571072 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.571090 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9"}
Feb 23 13:06:47.571096 master-0 kubenswrapper[17411]: I0223 13:06:47.571106 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571123 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"e5215076a24da7b39e84679bbfcb310a83f91ce7772234df3fcbb41f2f595a40"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571140 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"fd8a73b94af97a6ee5fd332de6ff901ee87339c2669fee29463cd1d6a2935792"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571157 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"177a00edcfd919e7d221798cd7875143318357f73a98d1f96f1e3d8cf020354d"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571174 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"c787706f881864850a5752d9ba5df7143c1f6317da14cf839c1de55559b98021"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571199 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac778133e25eb465803a668164b009d4ef07614c0d72a48dbffcdcb57920e9f5"
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571223 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b608c73a48de5d50e74c55aca28591372e15d9f2c907a4169def9790022466af"
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571315 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a8cc9938769758481eeb507a8a511e4fea4ac8603da42445f1e6fa2500df33"
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571359 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835102869e1f66afd25840f4e26fbf1c829644e975ef14b09eb97d3f81d79a06"
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571414 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"0bb705c5c9f04251f2f3ae5ef9f44d40f3c6c1b144c3946a4cd25703a7f7278f"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571433 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"6309b849305c2ac7e7421c226eeec915d4326c5ea8505a4a455386262b3b15bd"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571452 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"9b2e0681668d9a8b51eaa2c8d5041d6128575d63543d355f03fa756ab6c575b2"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571470 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"f678b337016f7dc45aece4a578c752c553927db2e4cd56688db82afa6521fb02"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571532 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88facd6cceb823d7867c66655ebb82fc519bdd5794630121e38248005478c94"
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571631 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"39fda2f491fa2a50f4f315b834ed6d23","Type":"ContainerStarted","Data":"7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571648 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"39fda2f491fa2a50f4f315b834ed6d23","Type":"ContainerStarted","Data":"1e4a89c63867c66249f3be8d13ff9c7bfaab9b37c45169bdf97b3f2b62ddd38e"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571745 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f"}
Feb 23 13:06:47.571756 master-0 kubenswrapper[17411]: I0223 13:06:47.571764 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5"}
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571826 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3"}
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571844 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4"}
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571858 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e"}
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571871 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerDied","Data":"9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78"}
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571886 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ed33f74deb6fdef2cfa169d8db13e51c","Type":"ContainerStarted","Data":"a356ead5da6fa11053b4f6032b0e4b23eab458d556eaf1bb2ab3b5d9b3aca4d2"}
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571912 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd68d3b1f759653fd820ab02c8905d3b26cab1cde130b09539ee365719ba231c"
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571927 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="843d775bbad7c7fe41df23fb96ec59c3909440741cf205f5eb1b07a6fc2a50c5"
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571942 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d15a93ba101f5328b2e0d71137561810703895a3b87feba2b93ea3506aebbec"
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.571976 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0ac9833a4a3f15b07b847e1c79a77066ab7928b08e00ff39adc0773ff4cfb5"
Feb 23 13:06:47.572465 master-0 kubenswrapper[17411]: I0223 13:06:47.572028 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5791c5d88fdddb4fe408255082461994583f6df86d1b6c29e0fb7f97bc9c0ae"
Feb 23 13:06:47.597150 master-0 kubenswrapper[17411]: E0223 13:06:47.595310 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:06:47.597150 master-0 kubenswrapper[17411]: E0223 13:06:47.597006 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:06:47.597546 master-0 kubenswrapper[17411]: E0223 13:06:47.597484 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:06:47.597789 master-0 kubenswrapper[17411]: E0223 13:06:47.597738 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:47.598087 master-0 kubenswrapper[17411]: E0223 13:06:47.598049 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:47.650629 master-0 kubenswrapper[17411]: I0223 13:06:47.650541 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:06:47.650629 master-0 kubenswrapper[17411]: I0223 13:06:47.650619 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.650659 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.650789 17411 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.650873 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.650914 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.650945 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.650981 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.651012 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.651045 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.651091 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.651143 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.651180 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.651221 master-0 kubenswrapper[17411]: I0223 13:06:47.651213 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:47.651702 master-0 kubenswrapper[17411]: I0223 13:06:47.651322 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.651702 master-0 kubenswrapper[17411]: I0223 13:06:47.651356 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:47.651702 master-0 kubenswrapper[17411]: I0223 13:06:47.651388 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.651702 master-0 kubenswrapper[17411]: I0223 13:06:47.651423 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.651702 master-0 kubenswrapper[17411]: I0223 13:06:47.651477 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.651702 master-0 kubenswrapper[17411]: I0223 13:06:47.651579 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:47.681936 master-0 kubenswrapper[17411]: I0223 13:06:47.681855 17411 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:06:47.685683 master-0 kubenswrapper[17411]: I0223 13:06:47.685638 17411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:06:47.685762 master-0 kubenswrapper[17411]: I0223 13:06:47.685690 17411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:06:47.685762 master-0 kubenswrapper[17411]: I0223 13:06:47.685710 17411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:06:47.685931 master-0 kubenswrapper[17411]: I0223 13:06:47.685898 17411 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 13:06:47.690388 master-0 kubenswrapper[17411]: 
E0223 13:06:47.690313 17411 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 23 13:06:47.752062 master-0 kubenswrapper[17411]: I0223 13:06:47.751855 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.752062 master-0 kubenswrapper[17411]: I0223 13:06:47.751955 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.752062 master-0 kubenswrapper[17411]: I0223 13:06:47.752002 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.752062 master-0 kubenswrapper[17411]: I0223 13:06:47.752046 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:47.752510 master-0 kubenswrapper[17411]: I0223 13:06:47.752078 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: 
\"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.752510 master-0 kubenswrapper[17411]: I0223 13:06:47.752198 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.752510 master-0 kubenswrapper[17411]: I0223 13:06:47.752313 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.752510 master-0 kubenswrapper[17411]: I0223 13:06:47.752416 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:47.752510 master-0 kubenswrapper[17411]: I0223 13:06:47.752454 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:47.752510 master-0 kubenswrapper[17411]: I0223 13:06:47.752468 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:47.752860 
master-0 kubenswrapper[17411]: I0223 13:06:47.752564 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:47.752860 master-0 kubenswrapper[17411]: I0223 13:06:47.752603 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.752860 master-0 kubenswrapper[17411]: I0223 13:06:47.752572 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:47.752860 master-0 kubenswrapper[17411]: I0223 13:06:47.752660 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.752860 master-0 kubenswrapper[17411]: I0223 13:06:47.752690 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.752860 master-0 kubenswrapper[17411]: I0223 13:06:47.752711 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.752860 master-0 kubenswrapper[17411]: I0223 13:06:47.752788 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.752860 master-0 kubenswrapper[17411]: I0223 13:06:47.752853 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.752908 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.752968 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753042 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753058 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753085 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753129 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753133 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" 
(UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753165 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753201 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753212 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753236 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753315 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod 
\"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:06:47.753361 master-0 kubenswrapper[17411]: I0223 13:06:47.753367 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753351 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753322 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753425 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753368 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") 
pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753443 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753478 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753514 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753547 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:06:47.754174 master-0 kubenswrapper[17411]: I0223 13:06:47.753602 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 
23 13:06:47.767078 master-0 kubenswrapper[17411]: I0223 13:06:47.766995 17411 apiserver.go:52] "Watching apiserver" Feb 23 13:06:47.794179 master-0 kubenswrapper[17411]: I0223 13:06:47.794078 17411 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 13:06:47.796106 master-0 kubenswrapper[17411]: I0223 13:06:47.796017 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl","openshift-cluster-node-tuning-operator/tuned-75bpf","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-network-operator/iptables-alerter-qd2ns","openshift-ovn-kubernetes/ovnkube-node-45ncb","openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86","openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v","openshift-ingress-operator/ingress-operator-6569778c84-gswst","openshift-kube-apiserver/installer-1-master-0","openshift-marketplace/redhat-operators-bxqsd","openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx","openshift-cluster-version/cluster-version-operator-57476485-j4p78","openshift-dns-operator/dns-operator-8c7d49845-7466r","openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74","openshift-service-ca/service-ca-576b4d78bd-nds57","openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf","openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-marketplace/redhat-marketplace-r8xxs","openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm","assisted-installer/assisted-installer-controller-mtn6f","kube-s
ystem/bootstrap-kube-scheduler-master-0","openshift-dns/dns-default-rcn5b","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd","openshift-network-node-identity/network-node-identity-4wvxd","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h","openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v","openshift-marketplace/marketplace-operator-6f5488b997-28zcz","openshift-network-diagnostics/network-check-target-shl6r","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms","openshift-multus/multus-additional-cni-plugins-f7cf9","openshift-multus/network-metrics-daemon-kq2rk","openshift-network-operator/network-operator-7d7db75979-rmsq8","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx","openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924","openshift-etcd/installer-1-master-0","openshift-insights/insights-operator-59b498fcfb-xltpx","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn","openshift-apiserver/apiserver-6dcf85cb46-cmf75","openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf","openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-marketplace/community-operators-mldw4","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/installer-5-master-0","openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl","openshift-authentication-operator/authentication-operator-5bd7c8678
4-ld4gj","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8","openshift-kube-apiserver/installer-1-retry-1-master-0","openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p","openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn","openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp","openshift-dns/node-resolver-bq97v","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n","openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s","openshift-marketplace/certified-operators-sfrhg","openshift-multus/multus-rmz8z","openshift-config-operator/openshift-config-operator-6f47d587d6-p5488","openshift-controller-manager/controller-manager-59947b7887-xg2ln","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8","openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2","openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"] Feb 23 13:06:47.796430 master-0 kubenswrapper[17411]: I0223 13:06:47.796377 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-mtn6f" Feb 23 13:06:47.814226 master-0 kubenswrapper[17411]: I0223 13:06:47.814061 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 13:06:47.814548 master-0 kubenswrapper[17411]: I0223 13:06:47.814480 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 13:06:47.814673 master-0 kubenswrapper[17411]: I0223 13:06:47.814540 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.815147 master-0 kubenswrapper[17411]: I0223 13:06:47.815042 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 13:06:47.815147 master-0 kubenswrapper[17411]: I0223 13:06:47.815113 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 13:06:47.815621 master-0 kubenswrapper[17411]: I0223 13:06:47.815377 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 13:06:47.815621 master-0 kubenswrapper[17411]: I0223 13:06:47.815574 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 23 13:06:47.815958 master-0 kubenswrapper[17411]: I0223 13:06:47.815747 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 13:06:47.815958 master-0 kubenswrapper[17411]: I0223 13:06:47.815836 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 13:06:47.815958 
master-0 kubenswrapper[17411]: I0223 13:06:47.815850 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.815967 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.816015 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.816186 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.816459 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.816483 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.816617 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.816696 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.816758 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817057 17411 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817191 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817205 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817085 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817219 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817347 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817077 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817449 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817114 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817538 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817096 17411 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817568 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.816043 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817849 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.817998 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.818217 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.818681 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.819155 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:06:47.820171 master-0 kubenswrapper[17411]: I0223 13:06:47.819286 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.819790 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.820403 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.819863 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.820462 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.820272 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.821088 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.821190 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.821685 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.821695 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.822552 17411 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
mirrorPodUID="9d320f59-640e-49f3-a17f-a4b8ea733d23" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.823826 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.828544 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.830459 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.833297 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.835593 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.835794 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 13:06:47.837308 master-0 kubenswrapper[17411]: I0223 13:06:47.835848 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 23 13:06:47.845149 master-0 kubenswrapper[17411]: I0223 13:06:47.845053 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:47.845593 master-0 kubenswrapper[17411]: I0223 13:06:47.845509 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.845593 master-0 kubenswrapper[17411]: I0223 13:06:47.845557 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.846086 master-0 kubenswrapper[17411]: I0223 13:06:47.846023 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 23 13:06:47.846293 master-0 kubenswrapper[17411]: I0223 13:06:47.846224 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 13:06:47.847185 master-0 kubenswrapper[17411]: I0223 13:06:47.847095 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 13:06:47.847327 master-0 kubenswrapper[17411]: I0223 13:06:47.847117 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.847327 master-0 kubenswrapper[17411]: I0223 13:06:47.847277 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 13:06:47.847650 master-0 kubenswrapper[17411]: I0223 13:06:47.847501 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 13:06:47.874432 master-0 kubenswrapper[17411]: I0223 13:06:47.848574 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.874432 master-0 kubenswrapper[17411]: I0223 13:06:47.848715 17411 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 23 13:06:47.874432 master-0 kubenswrapper[17411]: I0223 13:06:47.848757 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 13:06:47.883143 master-0 kubenswrapper[17411]: I0223 13:06:47.882888 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:06:47.883143 master-0 kubenswrapper[17411]: I0223 13:06:47.883027 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjthf\" (UniqueName: \"kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:06:47.883143 master-0 kubenswrapper[17411]: I0223 13:06:47.883075 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:06:47.883143 master-0 kubenswrapper[17411]: I0223 13:06:47.883112 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: 
\"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:06:47.883143 master-0 kubenswrapper[17411]: I0223 13:06:47.883159 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.883578 master-0 kubenswrapper[17411]: I0223 13:06:47.883191 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv5f\" (UniqueName: \"kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:06:47.883578 master-0 kubenswrapper[17411]: I0223 13:06:47.883237 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:06:47.883578 master-0 kubenswrapper[17411]: I0223 13:06:47.883339 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:06:47.883578 master-0 kubenswrapper[17411]: I0223 13:06:47.883377 17411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jg7c\" (UniqueName: \"kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:47.883578 master-0 kubenswrapper[17411]: I0223 13:06:47.883427 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.883578 master-0 kubenswrapper[17411]: I0223 13:06:47.883481 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:06:47.883578 master-0 kubenswrapper[17411]: I0223 13:06:47.883533 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:47.883578 master-0 kubenswrapper[17411]: I0223 13:06:47.883575 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8l8f\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " 
pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:06:47.883869 master-0 kubenswrapper[17411]: I0223 13:06:47.883618 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhgkv\" (UniqueName: \"kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:06:47.883869 master-0 kubenswrapper[17411]: I0223 13:06:47.883653 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7h97\" (UniqueName: \"kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:06:47.883869 master-0 kubenswrapper[17411]: I0223 13:06:47.883682 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.883869 master-0 kubenswrapper[17411]: I0223 13:06:47.883713 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2csk2\" (UniqueName: \"kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:06:47.883869 master-0 kubenswrapper[17411]: I0223 13:06:47.883747 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:06:47.883869 master-0 kubenswrapper[17411]: I0223 13:06:47.883775 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:06:47.883869 master-0 kubenswrapper[17411]: I0223 13:06:47.883805 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:06:47.883869 master-0 kubenswrapper[17411]: I0223 13:06:47.883841 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:47.884164 master-0 kubenswrapper[17411]: I0223 13:06:47.883879 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " 
pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.884164 master-0 kubenswrapper[17411]: I0223 13:06:47.883912 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrhrx\" (UniqueName: \"kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:06:47.884164 master-0 kubenswrapper[17411]: I0223 13:06:47.883956 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:06:47.884164 master-0 kubenswrapper[17411]: I0223 13:06:47.884003 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:06:47.884164 master-0 kubenswrapper[17411]: I0223 13:06:47.884046 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:06:47.884164 master-0 kubenswrapper[17411]: I0223 13:06:47.884074 17411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.884164 master-0 kubenswrapper[17411]: I0223 13:06:47.884111 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvr7p\" (UniqueName: \"kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:06:47.884164 master-0 kubenswrapper[17411]: I0223 13:06:47.884148 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:06:47.884468 master-0 kubenswrapper[17411]: I0223 13:06:47.884184 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4j2q\" (UniqueName: \"kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:06:47.884468 master-0 kubenswrapper[17411]: I0223 13:06:47.884221 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod 
\"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:47.884468 master-0 kubenswrapper[17411]: I0223 13:06:47.884264 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:47.884468 master-0 kubenswrapper[17411]: I0223 13:06:47.884341 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.884468 master-0 kubenswrapper[17411]: I0223 13:06:47.884390 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:06:47.884468 master-0 kubenswrapper[17411]: I0223 13:06:47.884445 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:06:47.884697 master-0 kubenswrapper[17411]: I0223 13:06:47.884483 
17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.884697 master-0 kubenswrapper[17411]: I0223 13:06:47.884511 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.884697 master-0 kubenswrapper[17411]: I0223 13:06:47.884542 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:06:47.884697 master-0 kubenswrapper[17411]: I0223 13:06:47.884586 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:06:47.884697 master-0 kubenswrapper[17411]: I0223 13:06:47.884615 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 
13:06:47.884697 master-0 kubenswrapper[17411]: I0223 13:06:47.884655 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:06:47.884697 master-0 kubenswrapper[17411]: I0223 13:06:47.884684 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slw4h\" (UniqueName: \"kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:06:47.884953 master-0 kubenswrapper[17411]: I0223 13:06:47.884714 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:06:47.884953 master-0 kubenswrapper[17411]: I0223 13:06:47.884743 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6rg\" (UniqueName: \"kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:06:47.884953 master-0 kubenswrapper[17411]: I0223 13:06:47.884784 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fppk7\" (UniqueName: \"kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:06:47.884953 master-0 kubenswrapper[17411]: I0223 13:06:47.884829 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:06:47.884953 master-0 kubenswrapper[17411]: I0223 13:06:47.884865 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:06:47.884953 master-0 kubenswrapper[17411]: I0223 13:06:47.884901 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.884953 master-0 kubenswrapper[17411]: I0223 13:06:47.884928 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " 
pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.885213 master-0 kubenswrapper[17411]: I0223 13:06:47.884968 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:06:47.885213 master-0 kubenswrapper[17411]: I0223 13:06:47.885001 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt9nl\" (UniqueName: \"kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.885213 master-0 kubenswrapper[17411]: I0223 13:06:47.885032 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:06:47.885213 master-0 kubenswrapper[17411]: I0223 13:06:47.885071 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:06:47.885213 master-0 kubenswrapper[17411]: I0223 13:06:47.885100 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:06:47.885213 master-0 kubenswrapper[17411]: I0223 13:06:47.885140 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.885213 master-0 kubenswrapper[17411]: I0223 13:06:47.885174 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:06:47.885213 master-0 kubenswrapper[17411]: I0223 13:06:47.885210 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885261 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 
13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885298 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885330 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885366 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885406 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885440 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config\") pod 
\"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885477 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885507 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885542 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:06:47.885615 master-0 kubenswrapper[17411]: I0223 13:06:47.885573 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmw9r\" (UniqueName: \"kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" Feb 23 13:06:47.885978 master-0 
kubenswrapper[17411]: I0223 13:06:47.885617 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdnn5\" (UniqueName: \"kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:06:47.885978 master-0 kubenswrapper[17411]: I0223 13:06:47.885666 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.885978 master-0 kubenswrapper[17411]: I0223 13:06:47.885730 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" Feb 23 13:06:47.885978 master-0 kubenswrapper[17411]: I0223 13:06:47.885772 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:06:47.885978 master-0 kubenswrapper[17411]: I0223 13:06:47.885809 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config\") pod 
\"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:06:47.885978 master-0 kubenswrapper[17411]: I0223 13:06:47.885854 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:06:47.885978 master-0 kubenswrapper[17411]: I0223 13:06:47.885891 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:47.885978 master-0 kubenswrapper[17411]: I0223 13:06:47.885926 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.885978 master-0 kubenswrapper[17411]: I0223 13:06:47.885962 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.885995 17411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886031 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfrht\" (UniqueName: \"kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht\") pod \"csi-snapshot-controller-operator-6fb4df594f-sx924\" (UID: \"b7585f9f-12e5-451b-beeb-db43ae778f25\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886062 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886090 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886116 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886140 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz9fr\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886169 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886196 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886223 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " 
pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886265 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.886317 master-0 kubenswrapper[17411]: I0223 13:06:47.886305 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngvd2\" (UniqueName: \"kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd" Feb 23 13:06:47.886739 master-0 kubenswrapper[17411]: I0223 13:06:47.886371 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4h6l\" (UniqueName: \"kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" Feb 23 13:06:47.886739 master-0 kubenswrapper[17411]: I0223 13:06:47.886411 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:06:47.886739 master-0 kubenswrapper[17411]: I0223 13:06:47.886442 17411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:06:47.886739 master-0 kubenswrapper[17411]: I0223 13:06:47.886467 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:06:47.886739 master-0 kubenswrapper[17411]: I0223 13:06:47.886511 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:06:47.886914 master-0 kubenswrapper[17411]: I0223 13:06:47.886891 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5540c-da7d-4b6f-a15f-394451f4674e-config\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:06:47.887260 master-0 kubenswrapper[17411]: I0223 13:06:47.887211 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 23 13:06:47.887697 master-0 kubenswrapper[17411]: I0223 13:06:47.887671 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" Feb 23 13:06:47.888031 master-0 kubenswrapper[17411]: I0223 13:06:47.888003 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/24dab1bc-cf56-429b-93ce-911970c41b5c-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:06:47.888420 master-0 kubenswrapper[17411]: I0223 13:06:47.888394 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-config\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" Feb 23 13:06:47.888516 master-0 kubenswrapper[17411]: I0223 13:06:47.888494 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c2b80534-3c9d-4ddb-9215-d50d63294c7c-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:06:47.888944 master-0 kubenswrapper[17411]: I0223 13:06:47.888917 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85958edf-e3da-4704-8f09-cf049101f2e6-metrics-tls\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " 
pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" Feb 23 13:06:47.889692 master-0 kubenswrapper[17411]: I0223 13:06:47.889664 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08577c3c-73d8-47f4-ba30-aec11af51d40-metrics-tls\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:06:47.890044 master-0 kubenswrapper[17411]: I0223 13:06:47.890009 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da5d5997-e45f-4858-a9a9-e880bc222caf-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:06:47.890276 master-0 kubenswrapper[17411]: I0223 13:06:47.890236 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80d5ac-27ce-4ba9-809e-28c86b80163b-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:06:47.890759 master-0 kubenswrapper[17411]: I0223 13:06:47.890731 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-srv-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:06:47.890907 master-0 kubenswrapper[17411]: I0223 13:06:47.890854 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 13:06:47.890948 master-0 kubenswrapper[17411]: I0223 13:06:47.890883 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 13:06:47.891148 master-0 kubenswrapper[17411]: I0223 13:06:47.891116 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cbcca259-0dbf-48ca-bf90-eec638dcdd10-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:06:47.891198 master-0 kubenswrapper[17411]: I0223 13:06:47.891140 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 23 13:06:47.891329 master-0 kubenswrapper[17411]: I0223 13:06:47.891282 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 13:06:47.891521 master-0 kubenswrapper[17411]: I0223 13:06:47.891489 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 13:06:47.891667 master-0 kubenswrapper[17411]: I0223 13:06:47.891635 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 23 13:06:47.891713 master-0 kubenswrapper[17411]: I0223 13:06:47.891665 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 13:06:47.891772 master-0 kubenswrapper[17411]: I0223 13:06:47.891291 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/24dab1bc-cf56-429b-93ce-911970c41b5c-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 23 13:06:47.891882 master-0 kubenswrapper[17411]: I0223 13:06:47.891862 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 13:06:47.891939 master-0 kubenswrapper[17411]: I0223 13:06:47.891913 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:06:47.892471 master-0 kubenswrapper[17411]: I0223 13:06:47.892420 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b5540c-da7d-4b6f-a15f-394451f4674e-serving-cert\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" Feb 23 13:06:47.892535 master-0 kubenswrapper[17411]: I0223 13:06:47.892509 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 23 13:06:47.892630 master-0 kubenswrapper[17411]: I0223 13:06:47.892592 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-config\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:06:47.892688 master-0 kubenswrapper[17411]: I0223 13:06:47.892640 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-serving-cert\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:06:47.894566 master-0 kubenswrapper[17411]: I0223 13:06:47.894517 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-config\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:06:47.894656 master-0 kubenswrapper[17411]: I0223 13:06:47.894625 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-serving-cert\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:06:47.894748 master-0 kubenswrapper[17411]: I0223 13:06:47.894716 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 23 13:06:47.894840 master-0 kubenswrapper[17411]: I0223 13:06:47.894804 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 23 13:06:47.894897 master-0 kubenswrapper[17411]: I0223 13:06:47.894810 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 23 13:06:47.895087 master-0 kubenswrapper[17411]: I0223 13:06:47.895052 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:06:47.895416 master-0 kubenswrapper[17411]: I0223 13:06:47.895379 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:06:47.895620 master-0 kubenswrapper[17411]: I0223 13:06:47.895583 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:06:47.895773 master-0 kubenswrapper[17411]: I0223 13:06:47.895734 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn"
Feb 23 13:06:47.895932 master-0 kubenswrapper[17411]: I0223 13:06:47.895876 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 23 13:06:47.895973 master-0 kubenswrapper[17411]: I0223 13:06:47.895910 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1799b6-85b0-4aed-8835-35cb3d8d1109-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:06:47.896069 master-0 kubenswrapper[17411]: I0223 13:06:47.896048 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-ca\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:06:47.896216 master-0 kubenswrapper[17411]: I0223 13:06:47.896188 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-config\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:06:47.896282 master-0 kubenswrapper[17411]: I0223 13:06:47.896223 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-etcd-client\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:06:47.896282 master-0 kubenswrapper[17411]: I0223 13:06:47.896235 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80d5ac-27ce-4ba9-809e-28c86b80163b-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8"
Feb 23 13:06:47.896511 master-0 kubenswrapper[17411]: I0223 13:06:47.896486 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 23 13:06:47.896549 master-0 kubenswrapper[17411]: I0223 13:06:47.896527 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"
Feb 23 13:06:47.896592 master-0 kubenswrapper[17411]: I0223 13:06:47.896550 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:06:47.896592 master-0 kubenswrapper[17411]: I0223 13:06:47.896280 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:06:47.896677 master-0 kubenswrapper[17411]: I0223 13:06:47.896602 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 23 13:06:47.896677 master-0 kubenswrapper[17411]: I0223 13:06:47.896650 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 23 13:06:47.896677 master-0 kubenswrapper[17411]: I0223 13:06:47.896660 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 23 13:06:47.896797 master-0 kubenswrapper[17411]: I0223 13:06:47.896743 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 23 13:06:47.896797 master-0 kubenswrapper[17411]: I0223 13:06:47.896762 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 23 13:06:47.896797 master-0 kubenswrapper[17411]: I0223 13:06:47.896771 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 23 13:06:47.896904 master-0 kubenswrapper[17411]: I0223 13:06:47.896819 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:06:47.896904 master-0 kubenswrapper[17411]: I0223 13:06:47.896831 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 23 13:06:47.896904 master-0 kubenswrapper[17411]: I0223 13:06:47.896878 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dcd03d6e-4c8c-400a-8001-343aaeeca93b-metrics-tls\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:06:47.897016 master-0 kubenswrapper[17411]: I0223 13:06:47.896910 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 23 13:06:47.897016 master-0 kubenswrapper[17411]: I0223 13:06:47.896952 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 23 13:06:47.897016 master-0 kubenswrapper[17411]: I0223 13:06:47.895684 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3dfb271-a659-45e0-b51d-5e99ec43b555-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:06:47.897122 master-0 kubenswrapper[17411]: I0223 13:06:47.897083 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 23 13:06:47.897122 master-0 kubenswrapper[17411]: I0223 13:06:47.897109 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 23 13:06:47.897187 master-0 kubenswrapper[17411]: I0223 13:06:47.896751 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 23 13:06:47.897233 master-0 kubenswrapper[17411]: I0223 13:06:47.897137 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1799b6-85b0-4aed-8835-35cb3d8d1109-config\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:06:47.897766 master-0 kubenswrapper[17411]: I0223 13:06:47.897730 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 23 13:06:47.897828 master-0 kubenswrapper[17411]: I0223 13:06:47.897779 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 23 13:06:47.898118 master-0 kubenswrapper[17411]: I0223 13:06:47.898083 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 23 13:06:47.899047 master-0 kubenswrapper[17411]: I0223 13:06:47.898993 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 23 13:06:47.899225 master-0 kubenswrapper[17411]: I0223 13:06:47.899182 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 23 13:06:47.899377 master-0 kubenswrapper[17411]: I0223 13:06:47.899337 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 23 13:06:47.899435 master-0 kubenswrapper[17411]: I0223 13:06:47.899386 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 23 13:06:47.899590 master-0 kubenswrapper[17411]: I0223 13:06:47.899555 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-daemon-config\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:06:47.899639 master-0 kubenswrapper[17411]: I0223 13:06:47.899611 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b80534-3c9d-4ddb-9215-d50d63294c7c-serving-cert\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:06:47.899712 master-0 kubenswrapper[17411]: I0223 13:06:47.899679 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-binary-copy\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9"
Feb 23 13:06:47.900334 master-0 kubenswrapper[17411]: I0223 13:06:47.900304 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 23 13:06:47.902869 master-0 kubenswrapper[17411]: I0223 13:06:47.902836 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 23 13:06:47.903078 master-0 kubenswrapper[17411]: I0223 13:06:47.903049 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 23 13:06:47.903258 master-0 kubenswrapper[17411]: I0223 13:06:47.903215 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 23 13:06:47.904606 master-0 kubenswrapper[17411]: I0223 13:06:47.904570 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9"
Feb 23 13:06:47.905777 master-0 kubenswrapper[17411]: I0223 13:06:47.905744 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:06:47.906322 master-0 kubenswrapper[17411]: I0223 13:06:47.906132 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee436961-c305-4c84-b4f9-175e1d8004fb-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:06:47.906637 master-0 kubenswrapper[17411]: I0223 13:06:47.906595 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cni-binary-copy\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:06:47.911291 master-0 kubenswrapper[17411]: I0223 13:06:47.906939 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee436961-c305-4c84-b4f9-175e1d8004fb-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:06:47.911291 master-0 kubenswrapper[17411]: I0223 13:06:47.907368 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9"
Feb 23 13:06:47.911291 master-0 kubenswrapper[17411]: I0223 13:06:47.907412 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 23 13:06:47.911291 master-0 kubenswrapper[17411]: I0223 13:06:47.907637 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 23 13:06:47.911291 master-0 kubenswrapper[17411]: I0223 13:06:47.910565 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 23 13:06:47.914983 master-0 kubenswrapper[17411]: I0223 13:06:47.914633 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 23 13:06:47.915325 master-0 kubenswrapper[17411]: I0223 13:06:47.915220 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 23 13:06:47.916715 master-0 kubenswrapper[17411]: I0223 13:06:47.916684 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 13:06:47.917515 master-0 kubenswrapper[17411]: I0223 13:06:47.917452 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd03d6e-4c8c-400a-8001-343aaeeca93b-trusted-ca\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:06:47.917515 master-0 kubenswrapper[17411]: I0223 13:06:47.917499 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dfb271-a659-45e0-b51d-5e99ec43b555-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl"
Feb 23 13:06:47.917758 master-0 kubenswrapper[17411]: I0223 13:06:47.917714 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 23 13:06:47.917949 master-0 kubenswrapper[17411]: I0223 13:06:47.917894 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v"
Feb 23 13:06:47.918907 master-0 kubenswrapper[17411]: I0223 13:06:47.918874 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 23 13:06:47.919257 master-0 kubenswrapper[17411]: I0223 13:06:47.919188 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 23 13:06:47.922014 master-0 kubenswrapper[17411]: I0223 13:06:47.921973 17411 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 23 13:06:47.922624 master-0 kubenswrapper[17411]: I0223 13:06:47.922576 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d953c37-1b74-4ce5-89cb-b3f53454fc57-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:06:47.927619 master-0 kubenswrapper[17411]: I0223 13:06:47.927567 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:06:47.939183 master-0 kubenswrapper[17411]: I0223 13:06:47.939136 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 13:06:47.958991 master-0 kubenswrapper[17411]: I0223 13:06:47.958943 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 23 13:06:47.979219 master-0 kubenswrapper[17411]: I0223 13:06:47.979172 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 23 13:06:47.987783 master-0 kubenswrapper[17411]: I0223 13:06:47.987705 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:06:47.987939 master-0 kubenswrapper[17411]: I0223 13:06:47.987901 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-os-release\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:06:47.988038 master-0 kubenswrapper[17411]: I0223 13:06:47.987995 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:06:47.988141 master-0 kubenswrapper[17411]: I0223 13:06:47.988106 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-bin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:06:47.988141 master-0 kubenswrapper[17411]: I0223 13:06:47.988119 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:06:47.988406 master-0 kubenswrapper[17411]: I0223 13:06:47.988216 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-tuned\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:06:47.988406 master-0 kubenswrapper[17411]: I0223 13:06:47.988343 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:06:47.988573 master-0 kubenswrapper[17411]: I0223 13:06:47.988423 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8cx9\" (UniqueName: \"kubernetes.io/projected/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-kube-api-access-d8cx9\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b"
Feb 23 13:06:47.988573 master-0 kubenswrapper[17411]: I0223 13:06:47.988428 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-tuned\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:06:47.988849 master-0 kubenswrapper[17411]: I0223 13:06:47.988607 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-etcd-client\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:06:47.988849 master-0 kubenswrapper[17411]: I0223 13:06:47.988786 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 13:06:47.988849 master-0 kubenswrapper[17411]: I0223 13:06:47.988834 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-utilities\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:47.989095 master-0 kubenswrapper[17411]: I0223 13:06:47.988874 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l49w\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-kube-api-access-9l49w\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:06:47.989095 master-0 kubenswrapper[17411]: I0223 13:06:47.988920 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2"
Feb 23 13:06:47.989095 master-0 kubenswrapper[17411]: I0223 13:06:47.988934 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85958edf-e3da-4704-8f09-cf049101f2e6-host-etc-kube\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 13:06:47.989095 master-0 kubenswrapper[17411]: I0223 13:06:47.988958 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2"
Feb 23 13:06:47.989095 master-0 kubenswrapper[17411]: I0223 13:06:47.989057 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-utilities\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:47.989556 master-0 kubenswrapper[17411]: I0223 13:06:47.989284 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-key\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:06:47.989556 master-0 kubenswrapper[17411]: I0223 13:06:47.989348 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:06:47.989556 master-0 kubenswrapper[17411]: I0223 13:06:47.989389 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:06:47.989556 master-0 kubenswrapper[17411]: I0223 13:06:47.989430 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9"
Feb 23 13:06:47.989556 master-0 kubenswrapper[17411]: I0223 13:06:47.989465 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:06:47.989556 master-0 kubenswrapper[17411]: I0223 13:06:47.989507 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:06:47.989556 master-0 kubenswrapper[17411]: I0223 13:06:47.989549 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-modprobe-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.989625 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-os-release\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.989630 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-key\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.989660 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.989785 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d32952be-0fe3-431f-aa8f-6a35159fa845-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.989854 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.989896 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r4jv\" (UniqueName: \"kubernetes.io/projected/34ad2537-b5fe-463f-8e95-f47cc886aa5e-kube-api-access-4r4jv\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.989933 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpkc\" (UniqueName: \"kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.989990 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bc22782-a369-48aa-a0e8-c1c63ffa3053-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.990144 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.990218 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:06:47.990309 master-0 kubenswrapper[17411]: I0223 13:06:47.990311 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/031016de-897e-42bc-9de4-843122f64a75-hosts-file\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990356 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990410 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990454 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l6fp\" (UniqueName: \"kubernetes.io/projected/54411ade-3383-48aa-ba10-62ffb40185b9-kube-api-access-8l6fp\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990498 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-ssl-certs\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990542 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-catalog-content\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990739 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990797 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990744 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-catalog-content\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990877 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-var-lib-kubelet\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:06:47.991427 master-0
kubenswrapper[17411]: I0223 13:06:47.990751 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.990930 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-catalog-content\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991005 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991035 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0128982b-01b4-49cb-ab4a-8759b844c86b-catalog-content\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991062 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-serving-cert\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: 
\"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991103 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991139 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbml7\" (UniqueName: \"kubernetes.io/projected/031016de-897e-42bc-9de4-843122f64a75-kube-api-access-sbml7\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991288 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991343 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlpqn\" (UniqueName: \"kubernetes.io/projected/c0520301-1a6b-49ca-acca-011692d5b784-kube-api-access-xlpqn\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991389 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbl2g\" (UniqueName: 
\"kubernetes.io/projected/c159d5f4-5c95-4600-80ec-a17a419cfd7a-kube-api-access-rbl2g\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991435 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bfbb4d6d-7047-48cb-be03-97a57fc688e3-cache\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:47.991427 master-0 kubenswrapper[17411]: I0223 13:06:47.991483 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-kubernetes\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991526 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-utilities\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991574 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-serving-cert\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991620 17411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bfbb4d6d-7047-48cb-be03-97a57fc688e3-cache\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991726 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991750 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-utilities\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991747 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d82f223-e28b-4917-8513-3ca5c6e9bff7-webhook-cert\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991775 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/54411ade-3383-48aa-ba10-62ffb40185b9-tmpfs\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991844 17411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991862 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-netns\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991890 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991869 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/54411ade-3383-48aa-ba10-62ffb40185b9-tmpfs\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.991946 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-serving-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 
23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992161 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdqd6\" (UniqueName: \"kubernetes.io/projected/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-kube-api-access-xdqd6\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992236 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992366 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992464 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-lib-modules\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992517 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992550 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwphb\" (UniqueName: \"kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992581 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-encryption-config\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992646 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysconfig\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992717 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 
13:06:47.992769 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-tmp\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992807 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992843 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992935 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.992939 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34ad2537-b5fe-463f-8e95-f47cc886aa5e-tmp\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " 
pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993037 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-env-overrides\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993060 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993152 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993196 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7fbab55-8405-44f4-ae2a-412c115ce411-metrics-certs\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993238 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod 
\"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993305 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993381 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70ccda5f-ca1a-4fce-b77f-a1132f85635a-snapshots\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993399 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-kubelet\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993421 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65tqd\" (UniqueName: \"kubernetes.io/projected/9c3f9dc5-d10d-452c-bf5d-c5830a444617-kube-api-access-65tqd\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993462 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts56d\" (UniqueName: 
\"kubernetes.io/projected/8db940c1-82ba-4b6e-8137-059e26ab1ced-kube-api-access-ts56d\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993578 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70ccda5f-ca1a-4fce-b77f-a1132f85635a-snapshots\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993608 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993653 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993694 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7283ee-8959-44b6-83fb-b152510485eb-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 
13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993736 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-cnibin\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993737 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993820 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-utilities\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993862 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-serving-cert\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993921 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-conf\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " 
pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993927 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f9dc5-d10d-452c-bf5d-c5830a444617-utilities\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs" Feb 23 13:06:47.993949 master-0 kubenswrapper[17411]: I0223 13:06:47.993959 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24gm8\" (UniqueName: \"kubernetes.io/projected/430cb782-18d5-4429-99ef-29d3dca0d803-kube-api-access-24gm8\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994174 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994233 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit-dir\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994356 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqsvs\" (UniqueName: 
\"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-kube-api-access-rqsvs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994397 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c4jr\" (UniqueName: \"kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994432 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994592 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994645 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plz5n\" (UniqueName: \"kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " 
pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994688 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994729 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r8s7\" (UniqueName: \"kubernetes.io/projected/71a07622-3038-4b8c-b6bb-5f28a4115012-kube-api-access-6r8s7\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994774 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994781 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-var-lib-cni-multus\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994832 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-script-lib\") pod \"ovnkube-node-45ncb\" (UID: 
\"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994915 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.994973 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsp9d\" (UniqueName: \"kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995028 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-audit-policies\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995075 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-catalog-content\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995112 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995150 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-env-overrides\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995149 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995218 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995270 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-catalog-content\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995306 17411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-host\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995344 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995353 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-socket-dir-parent\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995384 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0d6008c-6e09-4e61-83a5-60456ca90e1e-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995479 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0d6008c-6e09-4e61-83a5-60456ca90e1e-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995487 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995551 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-image-import-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995609 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995614 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995719 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: 
\"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995799 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995854 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995900 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-etc-kubernetes\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.995936 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996020 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c0520301-1a6b-49ca-acca-011692d5b784-audit-dir\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996119 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzwh\" (UniqueName: \"kubernetes.io/projected/29908b4a-0df5-4c46-b886-c968976c25fb-kube-api-access-dbzwh\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996176 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996238 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996333 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-utilities\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996385 17411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-service-ca\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996443 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996517 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29908b4a-0df5-4c46-b886-c968976c25fb-utilities\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996594 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996625 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/3d82f223-e28b-4917-8513-3ca5c6e9bff7-ovnkube-identity-cm\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " 
pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996643 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996638 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996768 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996808 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996865 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2s4f\" (UniqueName: 
\"kubernetes.io/projected/0128982b-01b4-49cb-ab4a-8759b844c86b-kube-api-access-b2s4f\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996871 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-multus-certs\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.996903 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlwn\" (UniqueName: \"kubernetes.io/projected/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-kube-api-access-8hlwn\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997069 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997116 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:06:47.998956 master-0 
kubenswrapper[17411]: I0223 13:06:47.997279 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zs2l\" (UniqueName: \"kubernetes.io/projected/d32952be-0fe3-431f-aa8f-6a35159fa845-kube-api-access-5zs2l\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997354 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpgsw\" (UniqueName: \"kubernetes.io/projected/0d7283ee-8959-44b6-83fb-b152510485eb-kube-api-access-hpgsw\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997418 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997721 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-encryption-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997767 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997795 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997823 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-client\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.997854 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdtv\" (UniqueName: \"kubernetes.io/projected/70ccda5f-ca1a-4fce-b77f-a1132f85635a-kube-api-access-mwdtv\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998080 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 
13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998173 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998214 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998287 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f6sq\" (UniqueName: \"kubernetes.io/projected/ae5c9120-c38d-46c0-af43-9275563b1a67-kube-api-access-8f6sq\") pod \"migrator-5c85bff57-xj4vr\" (UID: \"ae5c9120-c38d-46c0-af43-9275563b1a67\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998327 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998459 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crt2t\" (UniqueName: \"kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t\") pod 
\"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998785 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998797 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c51b25-f013-4f5c-acbd-598350468192-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998818 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998882 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhmk8\" (UniqueName: \"kubernetes.io/projected/16898873-740b-4b85-99cf-d25a28d4ab00-kube-api-access-xhmk8\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: 
I0223 13:06:47.998902 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998920 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v7b9\" (UniqueName: \"kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.998939 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.999057 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-cabundle\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57" Feb 23 13:06:47.998956 master-0 kubenswrapper[17411]: I0223 13:06:47.999104 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999287 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-kube-api-access\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999356 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2857n\" (UniqueName: \"kubernetes.io/projected/d91fa6bb-0c88-4930-884a-67e840d58a9f-kube-api-access-2857n\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999400 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999453 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999495 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/71a07622-3038-4b8c-b6bb-5f28a4115012-signing-cabundle\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999499 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999603 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-node-pullsecrets\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999647 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-trusted-ca-bundle\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999682 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-systemd\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " 
pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999718 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-sys\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999758 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999802 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhl9t\" (UniqueName: \"kubernetes.io/projected/3d85c030-4931-42d7-afd6-72b41789aea8-kube-api-access-zhl9t\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999841 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999879 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-etcd-serving-ca\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999933 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:47.999971 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000013 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000054 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000071 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovnkube-config\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000099 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-catalog-content\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000141 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-run\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000187 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d32952be-0fe3-431f-aa8f-6a35159fa845-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000231 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000295 17411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000338 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000377 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000441 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-multus-conf-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000477 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000477 
17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c51b25-f013-4f5c-acbd-598350468192-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000665 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-cnibin\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000772 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000808 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-system-cni-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000851 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " 
pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000914 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000969 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001036 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.000784 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-host-run-k8s-cni-cncf-io\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001193 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/048f4455-d99a-407b-8674-60efc7aa6ecb-iptables-alerter-script\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " 
pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001306 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65ddfc68-2612-42b6-ad11-6fe44f1cff60-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001331 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-hostroot\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001343 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-265wg\" (UniqueName: \"kubernetes.io/projected/4bc22782-a369-48aa-a0e8-c1c63ffa3053-kube-api-access-265wg\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001416 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b59f2a-7014-448c-9d3b-e38281f07dbc-system-cni-dir\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001418 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48d5b87-189b-45b6-ba55-37bd22d59eb6-catalog-content\") pod \"redhat-operators-bxqsd\" (UID: 
\"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001456 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-config-volume\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001593 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001670 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-trusted-ca-bundle\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001685 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-ovn-node-metrics-cert\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001754 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001799 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tgmq\" (UniqueName: \"kubernetes.io/projected/4e6bc033-cd90-4704-b03a-8e9c6c0d3904-kube-api-access-2tgmq\") pod \"csi-snapshot-controller-6847bb4785-hgkrm\" (UID: \"4e6bc033-cd90-4704-b03a-8e9c6c0d3904\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" Feb 23 13:06:48.006296 master-0 kubenswrapper[17411]: I0223 13:06:48.001884 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj957\" (UniqueName: \"kubernetes.io/projected/b48d5b87-189b-45b6-ba55-37bd22d59eb6-kube-api-access-nj957\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:48.009578 master-0 kubenswrapper[17411]: I0223 13:06:48.008702 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 23 13:06:48.020695 master-0 kubenswrapper[17411]: I0223 13:06:48.020656 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 23 13:06:48.022190 master-0 kubenswrapper[17411]: I0223 13:06:48.022079 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:48.039575 
master-0 kubenswrapper[17411]: I0223 13:06:48.039506 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 13:06:48.041207 master-0 kubenswrapper[17411]: I0223 13:06:48.041125 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.059950 master-0 kubenswrapper[17411]: I0223 13:06:48.059873 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 23 13:06:48.079592 master-0 kubenswrapper[17411]: I0223 13:06:48.079525 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 13:06:48.089087 master-0 kubenswrapper[17411]: I0223 13:06:48.089000 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-client\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.090876 master-0 kubenswrapper[17411]: I0223 13:06:48.090801 17411 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:06:48.094781 master-0 kubenswrapper[17411]: I0223 13:06:48.094697 17411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 23 13:06:48.094781 master-0 kubenswrapper[17411]: I0223 13:06:48.094771 17411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 23 13:06:48.095033 master-0 kubenswrapper[17411]: I0223 13:06:48.094802 17411 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasSufficientPID" Feb 23 13:06:48.095400 master-0 kubenswrapper[17411]: I0223 13:06:48.095358 17411 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 23 13:06:48.100912 master-0 kubenswrapper[17411]: I0223 13:06:48.100828 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 13:06:48.101805 master-0 kubenswrapper[17411]: I0223 13:06:48.101737 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-serving-cert\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.103076 master-0 kubenswrapper[17411]: I0223 13:06:48.103008 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.103219 master-0 kubenswrapper[17411]: I0223 13:06:48.103078 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.103219 master-0 kubenswrapper[17411]: I0223 13:06:48.103092 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-kubelet\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.103219 master-0 
kubenswrapper[17411]: I0223 13:06:48.103130 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:48.103484 master-0 kubenswrapper[17411]: I0223 13:06:48.103340 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:48.103484 master-0 kubenswrapper[17411]: I0223 13:06:48.103379 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-modprobe-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.103484 master-0 kubenswrapper[17411]: I0223 13:06:48.103386 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-bin\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.103668 master-0 kubenswrapper[17411]: I0223 13:06:48.103534 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-modprobe-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " 
pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.103668 master-0 kubenswrapper[17411]: I0223 13:06:48.103603 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/031016de-897e-42bc-9de4-843122f64a75-hosts-file\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v" Feb 23 13:06:48.103802 master-0 kubenswrapper[17411]: I0223 13:06:48.103676 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-ssl-certs\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:48.103802 master-0 kubenswrapper[17411]: I0223 13:06:48.103718 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.103913 master-0 kubenswrapper[17411]: I0223 13:06:48.103811 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-ssl-certs\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:48.103994 master-0 kubenswrapper[17411]: I0223 13:06:48.103852 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/031016de-897e-42bc-9de4-843122f64a75-hosts-file\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v" Feb 23 13:06:48.103994 master-0 kubenswrapper[17411]: I0223 13:06:48.103917 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-var-lib-kubelet\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.104152 master-0 kubenswrapper[17411]: I0223 13:06:48.103999 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.104152 master-0 kubenswrapper[17411]: I0223 13:06:48.104026 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-var-lib-kubelet\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.104152 master-0 kubenswrapper[17411]: I0223 13:06:48.104097 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:48.104416 master-0 kubenswrapper[17411]: I0223 13:06:48.104195 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bfbb4d6d-7047-48cb-be03-97a57fc688e3-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:48.104416 master-0 kubenswrapper[17411]: I0223 13:06:48.104340 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-kubernetes\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.104553 master-0 kubenswrapper[17411]: I0223 13:06:48.104483 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.104553 master-0 kubenswrapper[17411]: I0223 13:06:48.104492 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-kubernetes\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.104679 master-0 kubenswrapper[17411]: I0223 13:06:48.104643 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-var-lib-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.104740 master-0 kubenswrapper[17411]: I0223 13:06:48.104666 
17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-lib-modules\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.105027 master-0 kubenswrapper[17411]: I0223 13:06:48.104967 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysconfig\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.105128 master-0 kubenswrapper[17411]: I0223 13:06:48.105042 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-lib-modules\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.105128 master-0 kubenswrapper[17411]: I0223 13:06:48.105067 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.105299 master-0 kubenswrapper[17411]: I0223 13:06:48.105153 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-netns\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.105299 master-0 kubenswrapper[17411]: I0223 13:06:48.105155 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysconfig\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.105482 master-0 kubenswrapper[17411]: I0223 13:06:48.105376 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7283ee-8959-44b6-83fb-b152510485eb-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:48.105482 master-0 kubenswrapper[17411]: I0223 13:06:48.105479 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-conf\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.105661 master-0 kubenswrapper[17411]: I0223 13:06:48.105508 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0d7283ee-8959-44b6-83fb-b152510485eb-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:48.105661 master-0 kubenswrapper[17411]: I0223 13:06:48.105550 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-j4p78\" 
(UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:48.105661 master-0 kubenswrapper[17411]: I0223 13:06:48.105594 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit-dir\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.105925 master-0 kubenswrapper[17411]: I0223 13:06:48.105716 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.105925 master-0 kubenswrapper[17411]: I0223 13:06:48.105776 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-conf\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.105925 master-0 kubenswrapper[17411]: I0223 13:06:48.105840 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.105925 master-0 kubenswrapper[17411]: I0223 13:06:48.105879 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-host\") pod \"tuned-75bpf\" (UID: 
\"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.105925 master-0 kubenswrapper[17411]: I0223 13:06:48.105884 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106024 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-audit-dir\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.105836 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-systemd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106057 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0520301-1a6b-49ca-acca-011692d5b784-audit-dir\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106112 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-host\") pod \"tuned-75bpf\" (UID: 
\"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106169 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0520301-1a6b-49ca-acca-011692d5b784-audit-dir\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106194 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106218 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-systemd-units\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106400 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106560 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" 
(UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.106696 master-0 kubenswrapper[17411]: I0223 13:06:48.106637 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-d\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.106714 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-slash\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.106732 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.107227 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.107287 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-sysctl-d\") pod 
\"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.107572 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-ovn\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.107643 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.107715 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.107728 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-node-log\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.107812 master-0 kubenswrapper[17411]: I0223 13:06:48.107811 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-systemd\") pod 
\"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.108387 master-0 kubenswrapper[17411]: I0223 13:06:48.107878 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-sys\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.108387 master-0 kubenswrapper[17411]: I0223 13:06:48.107943 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:48.108387 master-0 kubenswrapper[17411]: I0223 13:06:48.108029 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-etc-systemd\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.108387 master-0 kubenswrapper[17411]: I0223 13:06:48.108057 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-sys\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.108387 master-0 kubenswrapper[17411]: I0223 13:06:48.108101 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c0d6008c-6e09-4e61-83a5-60456ca90e1e-etc-containers\") pod 
\"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:06:48.108387 master-0 kubenswrapper[17411]: I0223 13:06:48.108183 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-node-pullsecrets\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.108387 master-0 kubenswrapper[17411]: I0223 13:06:48.108328 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c159d5f4-5c95-4600-80ec-a17a419cfd7a-node-pullsecrets\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.110053 master-0 kubenswrapper[17411]: I0223 13:06:48.109934 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.110053 master-0 kubenswrapper[17411]: I0223 13:06:48.110042 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-run\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.110276 master-0 kubenswrapper[17411]: I0223 13:06:48.110160 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.111753 master-0 kubenswrapper[17411]: I0223 13:06:48.110213 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34ad2537-b5fe-463f-8e95-f47cc886aa5e-run\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf" Feb 23 13:06:48.111947 master-0 kubenswrapper[17411]: I0223 13:06:48.110099 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-log-socket\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.112621 master-0 kubenswrapper[17411]: I0223 13:06:48.112477 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-run-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.113793 master-0 kubenswrapper[17411]: I0223 13:06:48.113658 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:06:48.114091 master-0 kubenswrapper[17411]: I0223 13:06:48.114001 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.114295 master-0 kubenswrapper[17411]: I0223 13:06:48.114144 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.114782 master-0 kubenswrapper[17411]: I0223 13:06:48.114690 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-host-cni-netd\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.114928 master-0 kubenswrapper[17411]: I0223 13:06:48.114793 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/048f4455-d99a-407b-8674-60efc7aa6ecb-host-slash\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns" Feb 23 13:06:48.114928 master-0 kubenswrapper[17411]: I0223 13:06:48.114879 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-etc-openvswitch\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:48.120404 master-0 kubenswrapper[17411]: I0223 13:06:48.120306 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 13:06:48.140525 master-0 
kubenswrapper[17411]: I0223 13:06:48.140447 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 13:06:48.149211 master-0 kubenswrapper[17411]: I0223 13:06:48.149133 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c159d5f4-5c95-4600-80ec-a17a419cfd7a-encryption-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.159976 master-0 kubenswrapper[17411]: I0223 13:06:48.159893 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 13:06:48.162685 master-0 kubenswrapper[17411]: I0223 13:06:48.162621 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-etcd-serving-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.179812 master-0 kubenswrapper[17411]: I0223 13:06:48.179756 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 13:06:48.186285 master-0 kubenswrapper[17411]: I0223 13:06:48.186156 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-image-import-ca\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.211887 master-0 kubenswrapper[17411]: I0223 13:06:48.211776 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 13:06:48.212468 master-0 kubenswrapper[17411]: I0223 
13:06:48.212405 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-trusted-ca-bundle\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.220647 master-0 kubenswrapper[17411]: I0223 13:06:48.220591 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 13:06:48.241464 master-0 kubenswrapper[17411]: I0223 13:06:48.241392 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 13:06:48.260715 master-0 kubenswrapper[17411]: I0223 13:06:48.260553 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 13:06:48.271099 master-0 kubenswrapper[17411]: I0223 13:06:48.271026 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c159d5f4-5c95-4600-80ec-a17a419cfd7a-config\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:48.280059 master-0 kubenswrapper[17411]: I0223 13:06:48.279995 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 13:06:48.282291 master-0 kubenswrapper[17411]: I0223 13:06:48.282201 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-metrics-tls\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:06:48.314561 master-0 kubenswrapper[17411]: I0223 13:06:48.314476 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:48.318198 master-0 kubenswrapper[17411]: I0223 13:06:48.318103 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 13:06:48.320219 master-0 kubenswrapper[17411]: I0223 13:06:48.320175 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 13:06:48.322171 master-0 kubenswrapper[17411]: I0223 13:06:48.322097 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:48.322527 master-0 kubenswrapper[17411]: I0223 13:06:48.322196 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-config-volume\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b" Feb 23 13:06:48.348888 master-0 kubenswrapper[17411]: I0223 13:06:48.348804 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 23 13:06:48.361295 master-0 kubenswrapper[17411]: I0223 13:06:48.361170 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 23 13:06:48.370386 master-0 kubenswrapper[17411]: I0223 13:06:48.370205 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/bfbb4d6d-7047-48cb-be03-97a57fc688e3-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:48.380193 master-0 kubenswrapper[17411]: I0223 13:06:48.380108 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 23 13:06:48.383370 master-0 kubenswrapper[17411]: I0223 13:06:48.383291 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:06:48.400240 master-0 kubenswrapper[17411]: I0223 13:06:48.400120 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 13:06:48.403219 master-0 kubenswrapper[17411]: I0223 13:06:48.403154 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-encryption-config\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.419198 master-0 kubenswrapper[17411]: I0223 13:06:48.419131 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 13:06:48.422272 master-0 kubenswrapper[17411]: I0223 13:06:48.422174 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-etcd-serving-ca\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.440508 master-0 kubenswrapper[17411]: I0223 13:06:48.440428 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 13:06:48.449319 master-0 kubenswrapper[17411]: I0223 13:06:48.449225 17411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-etcd-client\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.460469 master-0 kubenswrapper[17411]: I0223 13:06:48.460417 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 13:06:48.466837 master-0 kubenswrapper[17411]: I0223 13:06:48.466788 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-audit-policies\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.479589 master-0 kubenswrapper[17411]: I0223 13:06:48.479507 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 13:06:48.482720 master-0 kubenswrapper[17411]: I0223 13:06:48.482656 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0520301-1a6b-49ca-acca-011692d5b784-trusted-ca-bundle\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.500854 master-0 kubenswrapper[17411]: I0223 13:06:48.500778 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 13:06:48.520364 master-0 kubenswrapper[17411]: I0223 13:06:48.520166 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 13:06:48.522737 master-0 kubenswrapper[17411]: I0223 13:06:48.522669 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0520301-1a6b-49ca-acca-011692d5b784-serving-cert\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:48.540503 master-0 kubenswrapper[17411]: I0223 13:06:48.540366 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 13:06:48.561839 master-0 kubenswrapper[17411]: I0223 13:06:48.561750 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:06:48.564847 master-0 kubenswrapper[17411]: I0223 13:06:48.564795 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-serving-cert\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:48.579771 master-0 kubenswrapper[17411]: I0223 13:06:48.579706 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 13:06:48.587515 master-0 kubenswrapper[17411]: I0223 13:06:48.587439 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-service-ca\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" Feb 23 13:06:48.600666 master-0 kubenswrapper[17411]: I0223 13:06:48.600616 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:06:48.619175 master-0 kubenswrapper[17411]: I0223 13:06:48.619093 17411 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:06:48.625970 master-0 kubenswrapper[17411]: I0223 13:06:48.625907 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:06:48.640190 master-0 kubenswrapper[17411]: I0223 13:06:48.640099 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:06:48.659875 master-0 kubenswrapper[17411]: I0223 13:06:48.659794 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-n8vwz" Feb 23 13:06:48.679928 master-0 kubenswrapper[17411]: I0223 13:06:48.679851 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:06:48.690044 master-0 kubenswrapper[17411]: I0223 13:06:48.689967 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:06:48.699675 master-0 kubenswrapper[17411]: I0223 13:06:48.699614 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:06:48.728428 master-0 kubenswrapper[17411]: I0223 13:06:48.728349 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:06:48.738028 master-0 
kubenswrapper[17411]: I0223 13:06:48.737955 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:06:48.739610 master-0 kubenswrapper[17411]: I0223 13:06:48.739221 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 13:06:48.751065 master-0 kubenswrapper[17411]: I0223 13:06:48.747773 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:06:48.762263 master-0 kubenswrapper[17411]: I0223 13:06:48.762015 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 13:06:48.782866 master-0 kubenswrapper[17411]: I0223 13:06:48.782658 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:06:48.800182 master-0 kubenswrapper[17411]: I0223 13:06:48.800118 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-wt8dr"
Feb 23 13:06:48.819475 master-0 kubenswrapper[17411]: I0223 13:06:48.819394 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:06:48.825262 master-0 kubenswrapper[17411]: I0223 13:06:48.825181 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:06:48.839948 master-0 kubenswrapper[17411]: I0223 13:06:48.839896 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 13:06:48.846761 master-0 kubenswrapper[17411]: I0223 13:06:48.846720 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:06:48.857759 master-0 kubenswrapper[17411]: I0223 13:06:48.857705 17411 request.go:700] Waited for 1.017160946s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0
Feb 23 13:06:48.861078 master-0 kubenswrapper[17411]: I0223 13:06:48.861011 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 23 13:06:48.872342 master-0 kubenswrapper[17411]: I0223 13:06:48.871311 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bc22782-a369-48aa-a0e8-c1c63ffa3053-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:06:48.881257 master-0 kubenswrapper[17411]: I0223 13:06:48.880965 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-sxjzf"
Feb 23 13:06:48.886536 master-0 kubenswrapper[17411]: I0223 13:06:48.886470 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 23 13:06:48.899355 master-0 kubenswrapper[17411]: I0223 13:06:48.899283 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 23 13:06:48.919518 master-0 kubenswrapper[17411]: I0223 13:06:48.919443 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-zmzm6"
Feb 23 13:06:48.941785 master-0 kubenswrapper[17411]: I0223 13:06:48.941715 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 23 13:06:48.952558 master-0 kubenswrapper[17411]: I0223 13:06:48.952481 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d32952be-0fe3-431f-aa8f-6a35159fa845-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v"
Feb 23 13:06:48.979491 master-0 kubenswrapper[17411]: I0223 13:06:48.979415 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 23 13:06:48.987375 master-0 kubenswrapper[17411]: I0223 13:06:48.986068 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 23 13:06:48.989854 master-0 kubenswrapper[17411]: E0223 13:06:48.989803 17411 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.989984 master-0 kubenswrapper[17411]: E0223 13:06:48.989910 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config podName:16898873-740b-4b85-99cf-d25a28d4ab00 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.489881431 +0000 UTC m=+2.917388038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config") pod "cluster-baremetal-operator-d6bb9bb76-8mxs2" (UID: "16898873-740b-4b85-99cf-d25a28d4ab00") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.990214 master-0 kubenswrapper[17411]: E0223 13:06:48.990179 17411 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:48.990335 master-0 kubenswrapper[17411]: E0223 13:06:48.990232 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls podName:16898873-740b-4b85-99cf-d25a28d4ab00 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.49021984 +0000 UTC m=+2.917726447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-8mxs2" (UID: "16898873-740b-4b85-99cf-d25a28d4ab00") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:48.990335 master-0 kubenswrapper[17411]: E0223 13:06:48.990298 17411 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:48.990460 master-0 kubenswrapper[17411]: E0223 13:06:48.990337 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.490325863 +0000 UTC m=+2.917832470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:48.991658 master-0 kubenswrapper[17411]: E0223 13:06:48.991583 17411 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:48.991790 master-0 kubenswrapper[17411]: E0223 13:06:48.991759 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls podName:0d7283ee-8959-44b6-83fb-b152510485eb nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.491718312 +0000 UTC m=+2.919224989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" (UID: "0d7283ee-8959-44b6-83fb-b152510485eb") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:48.991889 master-0 kubenswrapper[17411]: E0223 13:06:48.991863 17411 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.991958 master-0 kubenswrapper[17411]: E0223 13:06:48.991917 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle podName:70ccda5f-ca1a-4fce-b77f-a1132f85635a nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.491906187 +0000 UTC m=+2.919412944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle") pod "insights-operator-59b498fcfb-xltpx" (UID: "70ccda5f-ca1a-4fce-b77f-a1132f85635a") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.994354 master-0 kubenswrapper[17411]: E0223 13:06:48.993491 17411 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.994354 master-0 kubenswrapper[17411]: E0223 13:06:48.993565 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images podName:8db940c1-82ba-4b6e-8137-059e26ab1ced nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.493546654 +0000 UTC m=+2.921053331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images") pod "machine-api-operator-5c7cf458b4-zkmdz" (UID: "8db940c1-82ba-4b6e-8137-059e26ab1ced") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.994354 master-0 kubenswrapper[17411]: E0223 13:06:48.993610 17411 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.994354 master-0 kubenswrapper[17411]: E0223 13:06:48.993647 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.493635586 +0000 UTC m=+2.921142293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.994354 master-0 kubenswrapper[17411]: E0223 13:06:48.993684 17411 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.994354 master-0 kubenswrapper[17411]: E0223 13:06:48.993719 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config podName:0d7283ee-8959-44b6-83fb-b152510485eb nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.493708928 +0000 UTC m=+2.921215645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" (UID: "0d7283ee-8959-44b6-83fb-b152510485eb") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:48.995999 master-0 kubenswrapper[17411]: E0223 13:06:48.995943 17411 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:48.996163 master-0 kubenswrapper[17411]: E0223 13:06:48.996016 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert podName:54411ade-3383-48aa-ba10-62ffb40185b9 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.496003283 +0000 UTC m=+2.923509880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert") pod "packageserver-548fc9dc5-x4nbx" (UID: "54411ade-3383-48aa-ba10-62ffb40185b9") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.002499 master-0 kubenswrapper[17411]: E0223 13:06:49.002435 17411 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.002753 master-0 kubenswrapper[17411]: E0223 13:06:49.002533 17411 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.002753 master-0 kubenswrapper[17411]: E0223 13:06:49.002619 17411 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.002753 master-0 kubenswrapper[17411]: E0223 13:06:49.002542 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images podName:0d7283ee-8959-44b6-83fb-b152510485eb nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.502521496 +0000 UTC m=+2.930028093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images") pod "cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" (UID: "0d7283ee-8959-44b6-83fb-b152510485eb") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.002753 master-0 kubenswrapper[17411]: E0223 13:06:49.002672 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert podName:70ccda5f-ca1a-4fce-b77f-a1132f85635a nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.502650809 +0000 UTC m=+2.930157416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert") pod "insights-operator-59b498fcfb-xltpx" (UID: "70ccda5f-ca1a-4fce-b77f-a1132f85635a") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.002753 master-0 kubenswrapper[17411]: E0223 13:06:49.002689 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls podName:0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.50268099 +0000 UTC m=+2.930187597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-ldgbf" (UID: "0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.002753 master-0 kubenswrapper[17411]: E0223 13:06:49.002709 17411 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.002753 master-0 kubenswrapper[17411]: E0223 13:06:49.002743 17411 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002786 17411 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002820 17411 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002849 17411 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002877 17411 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002906 17411 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002939 17411 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002944 17411 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002752 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images podName:c33f208a-e158-47e2-83d5-ac792bf3a1d5 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.502743982 +0000 UTC m=+2.930250589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images") pod "machine-config-operator-7f8c75f984-82h6s" (UID: "c33f208a-e158-47e2-83d5-ac792bf3a1d5") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.002983 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images podName:16898873-740b-4b85-99cf-d25a28d4ab00 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.502974888 +0000 UTC m=+2.930481485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images") pod "cluster-baremetal-operator-d6bb9bb76-8mxs2" (UID: "16898873-740b-4b85-99cf-d25a28d4ab00") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003001 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config podName:430cb782-18d5-4429-99ef-29d3dca0d803 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.502991459 +0000 UTC m=+2.930498056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config") pod "machine-approver-7dd9c7d7b9-48xpf" (UID: "430cb782-18d5-4429-99ef-29d3dca0d803") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003016 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls podName:8db940c1-82ba-4b6e-8137-059e26ab1ced nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503009619 +0000 UTC m=+2.930516216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-zkmdz" (UID: "8db940c1-82ba-4b6e-8137-059e26ab1ced") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003028 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls podName:430cb782-18d5-4429-99ef-29d3dca0d803 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.50302248 +0000 UTC m=+2.930529077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-48xpf" (UID: "430cb782-18d5-4429-99ef-29d3dca0d803") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003038 17411 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003041 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config podName:3d85c030-4931-42d7-afd6-72b41789aea8 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.50303534 +0000 UTC m=+2.930541937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config") pod "cluster-autoscaler-operator-86b8dc6d6-6b92p" (UID: "3d85c030-4931-42d7-afd6-72b41789aea8") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003057 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert podName:3d85c030-4931-42d7-afd6-72b41789aea8 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503052041 +0000 UTC m=+2.930558638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert") pod "cluster-autoscaler-operator-86b8dc6d6-6b92p" (UID: "3d85c030-4931-42d7-afd6-72b41789aea8") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003069 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config podName:8db940c1-82ba-4b6e-8137-059e26ab1ced nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503063311 +0000 UTC m=+2.930569898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config") pod "machine-api-operator-5c7cf458b4-zkmdz" (UID: "8db940c1-82ba-4b6e-8137-059e26ab1ced") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003082 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle podName:70ccda5f-ca1a-4fce-b77f-a1132f85635a nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503076071 +0000 UTC m=+2.930582668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle") pod "insights-operator-59b498fcfb-xltpx" (UID: "70ccda5f-ca1a-4fce-b77f-a1132f85635a") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003096 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert podName:54411ade-3383-48aa-ba10-62ffb40185b9 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503088572 +0000 UTC m=+2.930595169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert") pod "packageserver-548fc9dc5-x4nbx" (UID: "54411ade-3383-48aa-ba10-62ffb40185b9") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003152 17411 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.003134 master-0 kubenswrapper[17411]: E0223 13:06:49.003165 17411 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: E0223 13:06:49.003189 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert podName:f88d6ed3-c0a6-4eef-b80c-417994cf69b0 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503176404 +0000 UTC m=+2.930683011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f94476f49-ck859" (UID: "f88d6ed3-c0a6-4eef-b80c-417994cf69b0") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: E0223 13:06:49.003206 17411 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: E0223 13:06:49.003208 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config podName:430cb782-18d5-4429-99ef-29d3dca0d803 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503200895 +0000 UTC m=+2.930707502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config") pod "machine-approver-7dd9c7d7b9-48xpf" (UID: "430cb782-18d5-4429-99ef-29d3dca0d803") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: E0223 13:06:49.003273 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert podName:16898873-740b-4b85-99cf-d25a28d4ab00 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503238246 +0000 UTC m=+2.930744863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert") pod "cluster-baremetal-operator-d6bb9bb76-8mxs2" (UID: "16898873-740b-4b85-99cf-d25a28d4ab00") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: E0223 13:06:49.003311 17411 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: E0223 13:06:49.003338 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config podName:18b48459-51ad-4b0d-8608-4ba6d3fa8e16 nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503330828 +0000 UTC m=+2.930837435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config") pod "controller-manager-59947b7887-xg2ln" (UID: "18b48459-51ad-4b0d-8608-4ba6d3fa8e16") : failed to sync configmap cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: E0223 13:06:49.003195 17411 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: E0223 13:06:49.003367 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert podName:d91fa6bb-0c88-4930-884a-67e840d58a9f nodeName:}" failed. No retries permitted until 2026-02-23 13:06:49.503361389 +0000 UTC m=+2.930867996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert") pod "catalog-operator-596f79dd6f-mjhwm" (UID: "d91fa6bb-0c88-4930-884a-67e840d58a9f") : failed to sync secret cache: timed out waiting for the condition
Feb 23 13:06:49.004022 master-0 kubenswrapper[17411]: I0223 13:06:49.003622 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d32952be-0fe3-431f-aa8f-6a35159fa845-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v"
Feb 23 13:06:49.010293 master-0 kubenswrapper[17411]: I0223 13:06:49.005154 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:06:49.020793 master-0 kubenswrapper[17411]: I0223 13:06:49.020754 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 23 13:06:49.039094 master-0 kubenswrapper[17411]: I0223 13:06:49.038981 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-4q8qn"
Feb 23 13:06:49.060007 master-0 kubenswrapper[17411]: I0223 13:06:49.059935 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 23 13:06:49.079111 master-0 kubenswrapper[17411]: I0223 13:06:49.079063 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 23 13:06:49.099202 master-0 kubenswrapper[17411]: I0223 13:06:49.099154 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 23 13:06:49.138831 master-0 kubenswrapper[17411]: I0223 13:06:49.138776 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 23 13:06:49.158771 master-0 kubenswrapper[17411]: I0223 13:06:49.158733 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 23 13:06:49.178936 master-0 kubenswrapper[17411]: I0223 13:06:49.178893 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 23 13:06:49.198545 master-0 kubenswrapper[17411]: I0223 13:06:49.198506 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-zmw9t"
Feb 23 13:06:49.218704 master-0 kubenswrapper[17411]: I0223 13:06:49.218645 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 23 13:06:49.238541 master-0 kubenswrapper[17411]: I0223 13:06:49.238495 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-h78lq"
Feb 23 13:06:49.259047 master-0 kubenswrapper[17411]: I0223 13:06:49.259013 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 23 13:06:49.279010 master-0 kubenswrapper[17411]: I0223 13:06:49.278971 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-dldvx"
Feb 23 13:06:49.298899 master-0 kubenswrapper[17411]: I0223 13:06:49.298792 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 23 13:06:49.319318 master-0 kubenswrapper[17411]: I0223 13:06:49.319236 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-f5gf8"
Feb 23 13:06:49.338573 master-0 kubenswrapper[17411]: I0223 13:06:49.338484 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 23 13:06:49.359359 master-0 kubenswrapper[17411]: I0223 13:06:49.359312 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 23 13:06:49.380463 master-0 kubenswrapper[17411]: I0223 13:06:49.380403 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 23 13:06:49.400708 master-0 kubenswrapper[17411]: I0223 13:06:49.400643 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-5499c"
Feb 23 13:06:49.420426 master-0 kubenswrapper[17411]: I0223 13:06:49.420362 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 23 13:06:49.439779 master-0 kubenswrapper[17411]: I0223 13:06:49.439718 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 23 13:06:49.459112 master-0 kubenswrapper[17411]: I0223 13:06:49.458999 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 23 13:06:49.479347 master-0 kubenswrapper[17411]: I0223 13:06:49.479084 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 23 13:06:49.500933 master-0 kubenswrapper[17411]: I0223 13:06:49.500858 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 23 13:06:49.522725 master-0 kubenswrapper[17411]: I0223 13:06:49.522616 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-wbd45"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.545714 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.545817 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.545876 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.545974 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546037 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546120 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546174 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546273 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546348 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\"
(UniqueName: \"kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546419 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546457 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546499 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546637 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: 
\"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546729 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546778 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546814 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546848 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546907 17411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.546986 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.547055 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.547090 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.547124 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: 
\"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.547183 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" Feb 23 13:06:49.547270 master-0 kubenswrapper[17411]: I0223 13:06:49.547277 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:49.548670 master-0 kubenswrapper[17411]: I0223 13:06:49.547987 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c33f208a-e158-47e2-83d5-ac792bf3a1d5-proxy-tls\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:49.548670 master-0 kubenswrapper[17411]: I0223 13:06:49.548015 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d85c030-4931-42d7-afd6-72b41789aea8-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:06:49.548670 master-0 kubenswrapper[17411]: I0223 13:06:49.548174 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:49.548670 master-0 kubenswrapper[17411]: I0223 13:06:49.548284 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:49.548670 master-0 kubenswrapper[17411]: I0223 13:06:49.548381 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:06:49.548670 master-0 kubenswrapper[17411]: I0223 13:06:49.548404 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-images\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:49.549012 master-0 kubenswrapper[17411]: I0223 13:06:49.548765 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16898873-740b-4b85-99cf-d25a28d4ab00-config\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:49.549012 master-0 
kubenswrapper[17411]: I0223 13:06:49.548774 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d91fa6bb-0c88-4930-884a-67e840d58a9f-srv-cert\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:06:49.549012 master-0 kubenswrapper[17411]: I0223 13:06:49.548995 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-images\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:49.549170 master-0 kubenswrapper[17411]: I0223 13:06:49.549093 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d85c030-4931-42d7-afd6-72b41789aea8-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: I0223 13:06:49.549339 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: I0223 13:06:49.549641 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config\") pod 
\"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: I0223 13:06:49.549921 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: I0223 13:06:49.549951 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/16898873-740b-4b85-99cf-d25a28d4ab00-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: I0223 13:06:49.550143 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33f208a-e158-47e2-83d5-ac792bf3a1d5-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: I0223 13:06:49.550387 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: 
I0223 13:06:49.555838 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: I0223 13:06:49.558832 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 23 13:06:49.559421 master-0 kubenswrapper[17411]: I0223 13:06:49.559066 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:49.560848 master-0 kubenswrapper[17411]: I0223 13:06:49.560499 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ccda5f-ca1a-4fce-b77f-a1132f85635a-service-ca-bundle\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:49.579604 master-0 kubenswrapper[17411]: I0223 13:06:49.579559 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 23 13:06:49.588192 master-0 kubenswrapper[17411]: I0223 13:06:49.588145 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70ccda5f-ca1a-4fce-b77f-a1132f85635a-serving-cert\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx" Feb 23 13:06:49.601285 master-0 kubenswrapper[17411]: I0223 13:06:49.600506 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 23 13:06:49.618830 master-0 
kubenswrapper[17411]: I0223 13:06:49.618789 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 23 13:06:49.651443 master-0 kubenswrapper[17411]: I0223 13:06:49.651388 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a80d5ac-27ce-4ba9-809e-28c86b80163b-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-j97h8\" (UID: \"0a80d5ac-27ce-4ba9-809e-28c86b80163b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-j97h8" Feb 23 13:06:49.685026 master-0 kubenswrapper[17411]: I0223 13:06:49.684953 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjthf\" (UniqueName: \"kubernetes.io/projected/08577c3c-73d8-47f4-ba30-aec11af51d40-kube-api-access-xjthf\") pod \"dns-operator-8c7d49845-7466r\" (UID: \"08577c3c-73d8-47f4-ba30-aec11af51d40\") " pod="openshift-dns-operator/dns-operator-8c7d49845-7466r" Feb 23 13:06:49.702031 master-0 kubenswrapper[17411]: I0223 13:06:49.701969 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv5f\" (UniqueName: \"kubernetes.io/projected/a3dfb271-a659-45e0-b51d-5e99ec43b555-kube-api-access-nmv5f\") pod \"cluster-node-tuning-operator-bcf775fc9-6llwl\" (UID: \"a3dfb271-a659-45e0-b51d-5e99ec43b555\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" Feb 23 13:06:49.711833 master-0 kubenswrapper[17411]: I0223 13:06:49.711761 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jg7c\" (UniqueName: \"kubernetes.io/projected/65ddfc68-2612-42b6-ad11-6fe44f1cff60-kube-api-access-8jg7c\") pod \"multus-additional-cni-plugins-f7cf9\" (UID: \"65ddfc68-2612-42b6-ad11-6fe44f1cff60\") " pod="openshift-multus/multus-additional-cni-plugins-f7cf9" Feb 23 13:06:49.720799 master-0 
kubenswrapper[17411]: I0223 13:06:49.720741 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 23 13:06:49.730974 master-0 kubenswrapper[17411]: I0223 13:06:49.730903 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" Feb 23 13:06:49.758822 master-0 kubenswrapper[17411]: I0223 13:06:49.758766 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8l8f\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-kube-api-access-r8l8f\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst" Feb 23 13:06:49.785404 master-0 kubenswrapper[17411]: I0223 13:06:49.785318 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhgkv\" (UniqueName: \"kubernetes.io/projected/cbcca259-0dbf-48ca-bf90-eec638dcdd10-kube-api-access-nhgkv\") pod \"olm-operator-5499d7f7bb-g9x74\" (UID: \"cbcca259-0dbf-48ca-bf90-eec638dcdd10\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74" Feb 23 13:06:49.805723 master-0 kubenswrapper[17411]: I0223 13:06:49.805628 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7h97\" (UniqueName: \"kubernetes.io/projected/24dab1bc-cf56-429b-93ce-911970c41b5c-kube-api-access-q7h97\") pod \"cluster-olm-operator-5bd7768f54-s8pzx\" (UID: \"24dab1bc-cf56-429b-93ce-911970c41b5c\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-s8pzx" Feb 
23 13:06:49.813731 master-0 kubenswrapper[17411]: I0223 13:06:49.813683 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrhrx\" (UniqueName: \"kubernetes.io/projected/3ab71705-d574-4f95-b3fc-9f7cf5e8a557-kube-api-access-rrhrx\") pod \"kube-storage-version-migrator-operator-fc889cfd5-ccvpn\" (UID: \"3ab71705-d574-4f95-b3fc-9f7cf5e8a557\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" Feb 23 13:06:49.818997 master-0 kubenswrapper[17411]: I0223 13:06:49.818950 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2628k" Feb 23 13:06:49.857857 master-0 kubenswrapper[17411]: I0223 13:06:49.857784 17411 request.go:700] Waited for 1.966906455s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Feb 23 13:06:49.857857 master-0 kubenswrapper[17411]: I0223 13:06:49.857841 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvr7p\" (UniqueName: \"kubernetes.io/projected/da5d5997-e45f-4858-a9a9-e880bc222caf-kube-api-access-tvr7p\") pod \"package-server-manager-5c75f78c8b-8tzms\" (UID: \"da5d5997-e45f-4858-a9a9-e880bc222caf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" Feb 23 13:06:49.876125 master-0 kubenswrapper[17411]: I0223 13:06:49.876051 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4j2q\" (UniqueName: \"kubernetes.io/projected/c2b80534-3c9d-4ddb-9215-d50d63294c7c-kube-api-access-l4j2q\") pod \"openshift-config-operator-6f47d587d6-p5488\" (UID: \"c2b80534-3c9d-4ddb-9215-d50d63294c7c\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:06:49.895018 master-0 kubenswrapper[17411]: I0223 
13:06:49.894964 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz9fr\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-kube-api-access-tz9fr\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:06:49.898902 master-0 kubenswrapper[17411]: I0223 13:06:49.898863 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-8ph7r" Feb 23 13:06:49.919908 master-0 kubenswrapper[17411]: I0223 13:06:49.919860 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-977zq" Feb 23 13:06:49.940666 master-0 kubenswrapper[17411]: I0223 13:06:49.940619 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 13:06:49.950867 master-0 kubenswrapper[17411]: I0223 13:06:49.950808 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/430cb782-18d5-4429-99ef-29d3dca0d803-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:49.960684 master-0 kubenswrapper[17411]: I0223 13:06:49.960626 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 13:06:49.979515 master-0 kubenswrapper[17411]: I0223 13:06:49.979462 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 13:06:49.988484 master-0 kubenswrapper[17411]: I0223 13:06:49.988442 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:50.010165 master-0 kubenswrapper[17411]: I0223 13:06:50.010108 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a406f63-eeeb-4da3-a1d0-86b5ab5d802c-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-7rb6v\" (UID: \"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" Feb 23 13:06:50.021156 master-0 kubenswrapper[17411]: I0223 13:06:50.021124 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 13:06:50.028684 master-0 kubenswrapper[17411]: I0223 13:06:50.028636 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-apiservice-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:50.029090 master-0 kubenswrapper[17411]: I0223 13:06:50.029043 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54411ade-3383-48aa-ba10-62ffb40185b9-webhook-cert\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:50.058944 master-0 kubenswrapper[17411]: I0223 13:06:50.058779 17411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4dmq5" Feb 23 13:06:50.070727 master-0 kubenswrapper[17411]: I0223 13:06:50.070672 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfrht\" (UniqueName: \"kubernetes.io/projected/b7585f9f-12e5-451b-beeb-db43ae778f25-kube-api-access-qfrht\") pod \"csi-snapshot-controller-operator-6fb4df594f-sx924\" (UID: \"b7585f9f-12e5-451b-beeb-db43ae778f25\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" Feb 23 13:06:50.071175 master-0 kubenswrapper[17411]: I0223 13:06:50.071105 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:06:50.079552 master-0 kubenswrapper[17411]: I0223 13:06:50.079507 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 13:06:50.080678 master-0 kubenswrapper[17411]: I0223 13:06:50.080631 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430cb782-18d5-4429-99ef-29d3dca0d803-config\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" Feb 23 13:06:50.100045 master-0 kubenswrapper[17411]: I0223 13:06:50.099038 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 13:06:50.119351 master-0 kubenswrapper[17411]: I0223 13:06:50.119299 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 23 13:06:50.119716 master-0 kubenswrapper[17411]: I0223 13:06:50.119678 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/0d7283ee-8959-44b6-83fb-b152510485eb-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f"
Feb 23 13:06:50.145562 master-0 kubenswrapper[17411]: I0223 13:06:50.141559 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:06:50.170261 master-0 kubenswrapper[17411]: I0223 13:06:50.163393 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 23 13:06:50.170261 master-0 kubenswrapper[17411]: I0223 13:06:50.168795 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d7283ee-8959-44b6-83fb-b152510485eb-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f"
Feb 23 13:06:50.182259 master-0 kubenswrapper[17411]: I0223 13:06:50.179582 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-lp4jk"
Feb 23 13:06:50.204258 master-0 kubenswrapper[17411]: I0223 13:06:50.199741 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:06:50.219392 master-0 kubenswrapper[17411]: I0223 13:06:50.219354 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-vhrrg"
Feb 23 13:06:50.239907 master-0 kubenswrapper[17411]: I0223 13:06:50.237954 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 23 13:06:50.240579 master-0 kubenswrapper[17411]: I0223 13:06:50.240544 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8db940c1-82ba-4b6e-8137-059e26ab1ced-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz"
Feb 23 13:06:50.276831 master-0 kubenswrapper[17411]: I0223 13:06:50.276768 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 23 13:06:50.277668 master-0 kubenswrapper[17411]: I0223 13:06:50.277621 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-config\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz"
Feb 23 13:06:50.279170 master-0 kubenswrapper[17411]: I0223 13:06:50.279129 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-n9dxs"
Feb 23 13:06:50.305747 master-0 kubenswrapper[17411]: I0223 13:06:50.299440 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 23 13:06:50.305747 master-0 kubenswrapper[17411]: I0223 13:06:50.301527 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8db940c1-82ba-4b6e-8137-059e26ab1ced-images\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz"
Feb 23 13:06:50.341895 master-0 kubenswrapper[17411]: I0223 13:06:50.339487 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1970ec8-620e-4529-bf3b-1cf9a52c27d3-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-jpf5n\" (UID: \"b1970ec8-620e-4529-bf3b-1cf9a52c27d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n"
Feb 23 13:06:50.369995 master-0 kubenswrapper[17411]: I0223 13:06:50.369948 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt9nl\" (UniqueName: \"kubernetes.io/projected/c0b59f2a-7014-448c-9d3b-e38281f07dbc-kube-api-access-nt9nl\") pod \"multus-rmz8z\" (UID: \"c0b59f2a-7014-448c-9d3b-e38281f07dbc\") " pod="openshift-multus/multus-rmz8z"
Feb 23 13:06:50.416734 master-0 kubenswrapper[17411]: I0223 13:06:50.416667 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2csk2\" (UniqueName: \"kubernetes.io/projected/25b5540c-da7d-4b6f-a15f-394451f4674e-kube-api-access-2csk2\") pod \"service-ca-operator-c48c8bf7c-rvccp\" (UID: \"25b5540c-da7d-4b6f-a15f-394451f4674e\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp"
Feb 23 13:06:50.485088 master-0 kubenswrapper[17411]: I0223 13:06:50.485033 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4h6l\" (UniqueName: \"kubernetes.io/projected/99399ebb-c95f-4663-b3b6-f5dfabf47fcf-kube-api-access-p4h6l\") pod \"openshift-controller-manager-operator-584cc7bcb5-t9gx8\" (UID: \"99399ebb-c95f-4663-b3b6-f5dfabf47fcf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8"
Feb 23 13:06:50.487644 master-0 kubenswrapper[17411]: I0223 13:06:50.487609 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6rg\" (UniqueName: \"kubernetes.io/projected/f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8-kube-api-access-gr6rg\") pod \"authentication-operator-5bd7c86784-ld4gj\" (UID: \"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj"
Feb 23 13:06:50.489645 master-0 kubenswrapper[17411]: I0223 13:06:50.489608 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a4b185e-17da-4711-a7b2-c2a9e1cd7b30-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-dgldn\" (UID: \"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn"
Feb 23 13:06:50.493428 master-0 kubenswrapper[17411]: I0223 13:06:50.493386 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmw9r\" (UniqueName: \"kubernetes.io/projected/ae1799b6-85b0-4aed-8835-35cb3d8d1109-kube-api-access-lmw9r\") pod \"openshift-apiserver-operator-8586dccc9b-6wk86\" (UID: \"ae1799b6-85b0-4aed-8835-35cb3d8d1109\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86"
Feb 23 13:06:50.538593 master-0 kubenswrapper[17411]: I0223 13:06:50.536345 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd03d6e-4c8c-400a-8001-343aaeeca93b-bound-sa-token\") pod \"ingress-operator-6569778c84-gswst\" (UID: \"dcd03d6e-4c8c-400a-8001-343aaeeca93b\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-gswst"
Feb 23 13:06:50.538593 master-0 kubenswrapper[17411]: I0223 13:06:50.536617 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngvd2\" (UniqueName: \"kubernetes.io/projected/ee436961-c305-4c84-b4f9-175e1d8004fb-kube-api-access-ngvd2\") pod \"cluster-monitoring-operator-6bb6d78bf-b2xcd\" (UID: \"ee436961-c305-4c84-b4f9-175e1d8004fb\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-b2xcd"
Feb 23 13:06:50.538593 master-0 kubenswrapper[17411]: I0223 13:06:50.538020 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppk7\" (UniqueName: \"kubernetes.io/projected/85958edf-e3da-4704-8f09-cf049101f2e6-kube-api-access-fppk7\") pod \"network-operator-7d7db75979-rmsq8\" (UID: \"85958edf-e3da-4704-8f09-cf049101f2e6\") " pod="openshift-network-operator/network-operator-7d7db75979-rmsq8"
Feb 23 13:06:50.541885 master-0 kubenswrapper[17411]: I0223 13:06:50.541374 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdnn5\" (UniqueName: \"kubernetes.io/projected/03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4-kube-api-access-kdnn5\") pod \"etcd-operator-545bf96f4d-drk2j\" (UID: \"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:06:50.598186 master-0 kubenswrapper[17411]: I0223 13:06:50.598067 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8cx9\" (UniqueName: \"kubernetes.io/projected/39ae352f-b9e3-4bbc-b59b-9fa92c7bc714-kube-api-access-d8cx9\") pod \"dns-default-rcn5b\" (UID: \"39ae352f-b9e3-4bbc-b59b-9fa92c7bc714\") " pod="openshift-dns/dns-default-rcn5b"
Feb 23 13:06:50.598389 master-0 kubenswrapper[17411]: I0223 13:06:50.598202 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slw4h\" (UniqueName: \"kubernetes.io/projected/1d953c37-1b74-4ce5-89cb-b3f53454fc57-kube-api-access-slw4h\") pod \"marketplace-operator-6f5488b997-28zcz\" (UID: \"1d953c37-1b74-4ce5-89cb-b3f53454fc57\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz"
Feb 23 13:06:50.622720 master-0 kubenswrapper[17411]: I0223 13:06:50.622662 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l49w\" (UniqueName: \"kubernetes.io/projected/c0d6008c-6e09-4e61-83a5-60456ca90e1e-kube-api-access-9l49w\") pod \"operator-controller-controller-manager-9cc7d7bb-j5hpl\" (UID: \"c0d6008c-6e09-4e61-83a5-60456ca90e1e\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:06:50.623081 master-0 kubenswrapper[17411]: E0223 13:06:50.623040 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 23 13:06:50.632856 master-0 kubenswrapper[17411]: I0223 13:06:50.632803 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpkc\" (UniqueName: \"kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc\") pod \"controller-manager-59947b7887-xg2ln\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:06:50.657944 master-0 kubenswrapper[17411]: I0223 13:06:50.657901 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r4jv\" (UniqueName: \"kubernetes.io/projected/34ad2537-b5fe-463f-8e95-f47cc886aa5e-kube-api-access-4r4jv\") pod \"tuned-75bpf\" (UID: \"34ad2537-b5fe-463f-8e95-f47cc886aa5e\") " pod="openshift-cluster-node-tuning-operator/tuned-75bpf"
Feb 23 13:06:50.699293 master-0 kubenswrapper[17411]: I0223 13:06:50.696795 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbml7\" (UniqueName: \"kubernetes.io/projected/031016de-897e-42bc-9de4-843122f64a75-kube-api-access-sbml7\") pod \"node-resolver-bq97v\" (UID: \"031016de-897e-42bc-9de4-843122f64a75\") " pod="openshift-dns/node-resolver-bq97v"
Feb 23 13:06:50.715136 master-0 kubenswrapper[17411]: I0223 13:06:50.715077 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l6fp\" (UniqueName: \"kubernetes.io/projected/54411ade-3383-48aa-ba10-62ffb40185b9-kube-api-access-8l6fp\") pod \"packageserver-548fc9dc5-x4nbx\" (UID: \"54411ade-3383-48aa-ba10-62ffb40185b9\") " pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx"
Feb 23 13:06:50.812020 master-0 kubenswrapper[17411]: I0223 13:06:50.810592 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts56d\" (UniqueName: \"kubernetes.io/projected/8db940c1-82ba-4b6e-8137-059e26ab1ced-kube-api-access-ts56d\") pod \"machine-api-operator-5c7cf458b4-zkmdz\" (UID: \"8db940c1-82ba-4b6e-8137-059e26ab1ced\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz"
Feb 23 13:06:50.812020 master-0 kubenswrapper[17411]: I0223 13:06:50.811316 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdqd6\" (UniqueName: \"kubernetes.io/projected/f88d6ed3-c0a6-4eef-b80c-417994cf69b0-kube-api-access-xdqd6\") pod \"cluster-storage-operator-f94476f49-ck859\" (UID: \"f88d6ed3-c0a6-4eef-b80c-417994cf69b0\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859"
Feb 23 13:06:50.812020 master-0 kubenswrapper[17411]: I0223 13:06:50.811730 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbtg\" (UniqueName: \"kubernetes.io/projected/c33f208a-e158-47e2-83d5-ac792bf3a1d5-kube-api-access-kpbtg\") pod \"machine-config-operator-7f8c75f984-82h6s\" (UID: \"c33f208a-e158-47e2-83d5-ac792bf3a1d5\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:06:50.812020 master-0 kubenswrapper[17411]: I0223 13:06:50.811962 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbl2g\" (UniqueName: \"kubernetes.io/projected/c159d5f4-5c95-4600-80ec-a17a419cfd7a-kube-api-access-rbl2g\") pod \"apiserver-6dcf85cb46-cmf75\" (UID: \"c159d5f4-5c95-4600-80ec-a17a419cfd7a\") " pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:06:50.814636 master-0 kubenswrapper[17411]: I0223 13:06:50.814362 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65tqd\" (UniqueName: \"kubernetes.io/projected/9c3f9dc5-d10d-452c-bf5d-c5830a444617-kube-api-access-65tqd\") pod \"redhat-marketplace-r8xxs\" (UID: \"9c3f9dc5-d10d-452c-bf5d-c5830a444617\") " pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:50.817209 master-0 kubenswrapper[17411]: I0223 13:06:50.816756 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlpqn\" (UniqueName: \"kubernetes.io/projected/c0520301-1a6b-49ca-acca-011692d5b784-kube-api-access-xlpqn\") pod \"apiserver-5ddfd84bb7-vhg7p\" (UID: \"c0520301-1a6b-49ca-acca-011692d5b784\") " pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:06:50.817488 master-0 kubenswrapper[17411]: I0223 13:06:50.817453 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwphb\" (UniqueName: \"kubernetes.io/projected/e7fbab55-8405-44f4-ae2a-412c115ce411-kube-api-access-lwphb\") pod \"network-metrics-daemon-kq2rk\" (UID: \"e7fbab55-8405-44f4-ae2a-412c115ce411\") " pod="openshift-multus/network-metrics-daemon-kq2rk"
Feb 23 13:06:50.836993 master-0 kubenswrapper[17411]: I0223 13:06:50.834535 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24gm8\" (UniqueName: \"kubernetes.io/projected/430cb782-18d5-4429-99ef-29d3dca0d803-kube-api-access-24gm8\") pod \"machine-approver-7dd9c7d7b9-48xpf\" (UID: \"430cb782-18d5-4429-99ef-29d3dca0d803\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf"
Feb 23 13:06:50.846715 master-0 kubenswrapper[17411]: I0223 13:06:50.846667 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"
Feb 23 13:06:50.858457 master-0 kubenswrapper[17411]: I0223 13:06:50.858355 17411 request.go:700] Waited for 2.86376074s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token
Feb 23 13:06:50.971268 master-0 kubenswrapper[17411]: I0223 13:06:50.970039 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r8s7\" (UniqueName: \"kubernetes.io/projected/71a07622-3038-4b8c-b6bb-5f28a4115012-kube-api-access-6r8s7\") pod \"service-ca-576b4d78bd-nds57\" (UID: \"71a07622-3038-4b8c-b6bb-5f28a4115012\") " pod="openshift-service-ca/service-ca-576b4d78bd-nds57"
Feb 23 13:06:50.971268 master-0 kubenswrapper[17411]: I0223 13:06:50.970258 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqsvs\" (UniqueName: \"kubernetes.io/projected/bfbb4d6d-7047-48cb-be03-97a57fc688e3-kube-api-access-rqsvs\") pod \"catalogd-controller-manager-84b8d9d697-bckd6\" (UID: \"bfbb4d6d-7047-48cb-be03-97a57fc688e3\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:06:50.971268 master-0 kubenswrapper[17411]: I0223 13:06:50.970686 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plz5n\" (UniqueName: \"kubernetes.io/projected/048f4455-d99a-407b-8674-60efc7aa6ecb-kube-api-access-plz5n\") pod \"iptables-alerter-qd2ns\" (UID: \"048f4455-d99a-407b-8674-60efc7aa6ecb\") " pod="openshift-network-operator/iptables-alerter-qd2ns"
Feb 23 13:06:50.971268 master-0 kubenswrapper[17411]: I0223 13:06:50.971023 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4jr\" (UniqueName: \"kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr\") pod \"route-controller-manager-64ccc6b554-znpw2\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:06:51.032471 master-0 kubenswrapper[17411]: I0223 13:06:51.032422 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsp9d\" (UniqueName: \"kubernetes.io/projected/b4c51b25-f013-4f5c-acbd-598350468192-kube-api-access-fsp9d\") pod \"ovnkube-control-plane-5d8dfcdc87-8mw8h\" (UID: \"b4c51b25-f013-4f5c-acbd-598350468192\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h"
Feb 23 13:06:51.033577 master-0 kubenswrapper[17411]: I0223 13:06:51.033553 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzwh\" (UniqueName: \"kubernetes.io/projected/29908b4a-0df5-4c46-b886-c968976c25fb-kube-api-access-dbzwh\") pod \"community-operators-mldw4\" (UID: \"29908b4a-0df5-4c46-b886-c968976c25fb\") " pod="openshift-marketplace/community-operators-mldw4"
Feb 23 13:06:51.116850 master-0 kubenswrapper[17411]: I0223 13:06:51.116804 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlwn\" (UniqueName: \"kubernetes.io/projected/0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab-kube-api-access-8hlwn\") pod \"cluster-samples-operator-65c5c48b9b-ldgbf\" (UID: \"0e9742a8-81c2-4d17-8ed4-6ca0cd3747ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-ldgbf"
Feb 23 13:06:51.118016 master-0 kubenswrapper[17411]: I0223 13:06:51.117986 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") pod \"multus-admission-controller-5f98f4f8d5-8hstp\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"
Feb 23 13:06:51.120275 master-0 kubenswrapper[17411]: I0223 13:06:51.120230 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2s4f\" (UniqueName: \"kubernetes.io/projected/0128982b-01b4-49cb-ab4a-8759b844c86b-kube-api-access-b2s4f\") pod \"certified-operators-sfrhg\" (UID: \"0128982b-01b4-49cb-ab4a-8759b844c86b\") " pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:51.122130 master-0 kubenswrapper[17411]: I0223 13:06:51.122092 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpgsw\" (UniqueName: \"kubernetes.io/projected/0d7283ee-8959-44b6-83fb-b152510485eb-kube-api-access-hpgsw\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f\" (UID: \"0d7283ee-8959-44b6-83fb-b152510485eb\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f"
Feb 23 13:06:51.127067 master-0 kubenswrapper[17411]: I0223 13:06:51.127021 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zs2l\" (UniqueName: \"kubernetes.io/projected/d32952be-0fe3-431f-aa8f-6a35159fa845-kube-api-access-5zs2l\") pod \"cloud-credential-operator-6968c58f46-gss4v\" (UID: \"d32952be-0fe3-431f-aa8f-6a35159fa845\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v"
Feb 23 13:06:51.133612 master-0 kubenswrapper[17411]: I0223 13:06:51.133566 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crt2t\" (UniqueName: \"kubernetes.io/projected/3d82f223-e28b-4917-8513-3ca5c6e9bff7-kube-api-access-crt2t\") pod \"network-node-identity-4wvxd\" (UID: \"3d82f223-e28b-4917-8513-3ca5c6e9bff7\") " pod="openshift-network-node-identity/network-node-identity-4wvxd"
Feb 23 13:06:51.145175 master-0 kubenswrapper[17411]: I0223 13:06:51.145130 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhmk8\" (UniqueName: \"kubernetes.io/projected/16898873-740b-4b85-99cf-d25a28d4ab00-kube-api-access-xhmk8\") pod \"cluster-baremetal-operator-d6bb9bb76-8mxs2\" (UID: \"16898873-740b-4b85-99cf-d25a28d4ab00\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2"
Feb 23 13:06:51.145947 master-0 kubenswrapper[17411]: I0223 13:06:51.145907 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwdtv\" (UniqueName: \"kubernetes.io/projected/70ccda5f-ca1a-4fce-b77f-a1132f85635a-kube-api-access-mwdtv\") pod \"insights-operator-59b498fcfb-xltpx\" (UID: \"70ccda5f-ca1a-4fce-b77f-a1132f85635a\") " pod="openshift-insights/insights-operator-59b498fcfb-xltpx"
Feb 23 13:06:51.158322 master-0 kubenswrapper[17411]: I0223 13:06:51.158230 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v7b9\" (UniqueName: \"kubernetes.io/projected/ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2-kube-api-access-7v7b9\") pod \"ovnkube-node-45ncb\" (UID: \"ffc2e8a2-ea4d-4d8d-9bdf-5127a8d717c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:06:51.158548 master-0 kubenswrapper[17411]: I0223 13:06:51.158368 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f6sq\" (UniqueName: \"kubernetes.io/projected/ae5c9120-c38d-46c0-af43-9275563b1a67-kube-api-access-8f6sq\") pod \"migrator-5c85bff57-xj4vr\" (UID: \"ae5c9120-c38d-46c0-af43-9275563b1a67\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-xj4vr"
Feb 23 13:06:51.184891 master-0 kubenswrapper[17411]: I0223 13:06:51.184837 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc576a63-0ea6-40c8-90bc-c44b5dc95ecd-kube-api-access\") pod \"cluster-version-operator-57476485-j4p78\" (UID: \"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd\") " pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78"
Feb 23 13:06:51.196473 master-0 kubenswrapper[17411]: I0223 13:06:51.196421 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2857n\" (UniqueName: \"kubernetes.io/projected/d91fa6bb-0c88-4930-884a-67e840d58a9f-kube-api-access-2857n\") pod \"catalog-operator-596f79dd6f-mjhwm\" (UID: \"d91fa6bb-0c88-4930-884a-67e840d58a9f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm"
Feb 23 13:06:51.261689 master-0 kubenswrapper[17411]: I0223 13:06:51.261529 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2cgc\" (UniqueName: \"kubernetes.io/projected/d0c7587b-eea6-4d98-b39d-3a0feba4035d-kube-api-access-q2cgc\") pod \"network-check-target-shl6r\" (UID: \"d0c7587b-eea6-4d98-b39d-3a0feba4035d\") " pod="openshift-network-diagnostics/network-check-target-shl6r"
Feb 23 13:06:51.262276 master-0 kubenswrapper[17411]: I0223 13:06:51.262212 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhl9t\" (UniqueName: \"kubernetes.io/projected/3d85c030-4931-42d7-afd6-72b41789aea8-kube-api-access-zhl9t\") pod \"cluster-autoscaler-operator-86b8dc6d6-6b92p\" (UID: \"3d85c030-4931-42d7-afd6-72b41789aea8\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p"
Feb 23 13:06:51.267677 master-0 kubenswrapper[17411]: I0223 13:06:51.267531 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-265wg\" (UniqueName: \"kubernetes.io/projected/4bc22782-a369-48aa-a0e8-c1c63ffa3053-kube-api-access-265wg\") pod \"control-plane-machine-set-operator-686847ff5f-rvz4w\" (UID: \"4bc22782-a369-48aa-a0e8-c1c63ffa3053\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w"
Feb 23 13:06:51.279830 master-0 kubenswrapper[17411]: I0223 13:06:51.279765 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tgmq\" (UniqueName: \"kubernetes.io/projected/4e6bc033-cd90-4704-b03a-8e9c6c0d3904-kube-api-access-2tgmq\") pod \"csi-snapshot-controller-6847bb4785-hgkrm\" (UID: \"4e6bc033-cd90-4704-b03a-8e9c6c0d3904\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm"
Feb 23 13:06:51.299759 master-0 kubenswrapper[17411]: I0223 13:06:51.299701 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj957\" (UniqueName: \"kubernetes.io/projected/b48d5b87-189b-45b6-ba55-37bd22d59eb6-kube-api-access-nj957\") pod \"redhat-operators-bxqsd\" (UID: \"b48d5b87-189b-45b6-ba55-37bd22d59eb6\") " pod="openshift-marketplace/redhat-operators-bxqsd"
Feb 23 13:06:51.353208 master-0 kubenswrapper[17411]: E0223 13:06:51.353052 17411 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.485s"
Feb 23 13:06:51.353208 master-0 kubenswrapper[17411]: I0223 13:06:51.353128 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:06:51.353208 master-0 kubenswrapper[17411]: I0223 13:06:51.353200 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:06:51.353497 master-0 kubenswrapper[17411]: I0223 13:06:51.353390 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:06:51.353571 master-0 kubenswrapper[17411]: I0223 13:06:51.353547 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:06:51.354204 master-0 kubenswrapper[17411]: I0223 13:06:51.354167 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" event={"ID":"c33f208a-e158-47e2-83d5-ac792bf3a1d5","Type":"ContainerStarted","Data":"f20870fedd39a5fcac2849dfe260df528edaaae565ef9981e8dd778b3bbb8634"}
Feb 23 13:06:51.354291 master-0 kubenswrapper[17411]: I0223 13:06:51.354211 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" event={"ID":"c33f208a-e158-47e2-83d5-ac792bf3a1d5","Type":"ContainerStarted","Data":"691aedbd28a747f226bebdd350428eca31ef9a07fa5127fd9ae499bd323b6128"}
Feb 23 13:06:51.362365 master-0 kubenswrapper[17411]: I0223 13:06:51.362324 17411 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Feb 23 13:06:51.362547 master-0 kubenswrapper[17411]: I0223 13:06:51.362409 17411 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 23 13:06:51.371016 master-0 kubenswrapper[17411]: I0223 13:06:51.369401 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 23 13:06:51.404718 master-0 kubenswrapper[17411]: I0223 13:06:51.404656 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:06:51.404718 master-0 kubenswrapper[17411]: I0223 13:06:51.404719 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:51.404962 master-0 kubenswrapper[17411]: I0223 13:06:51.404817 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:51.404962 master-0 kubenswrapper[17411]: I0223 13:06:51.404893 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-g9x74"
Feb 23 13:06:51.404962 master-0 kubenswrapper[17411]: I0223 13:06:51.404909 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s"]
Feb 23 13:06:51.404962 master-0 kubenswrapper[17411]: I0223 13:06:51.404925 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 23 13:06:51.404962 master-0 kubenswrapper[17411]: I0223 13:06:51.404936 17411 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="9d320f59-640e-49f3-a17f-a4b8ea733d23"
Feb 23 13:06:51.404962 master-0 kubenswrapper[17411]: I0223 13:06:51.404955 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 23 13:06:51.404962 master-0 kubenswrapper[17411]: I0223 13:06:51.404963 17411 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="9d320f59-640e-49f3-a17f-a4b8ea733d23"
Feb 23 13:06:51.405174 master-0 kubenswrapper[17411]: I0223 13:06:51.404990 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rcn5b"
Feb 23 13:06:51.405174 master-0 kubenswrapper[17411]: I0223 13:06:51.405010 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rcn5b"
Feb 23 13:06:51.499429 master-0 kubenswrapper[17411]: I0223 13:06:51.499349 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:51.539957 master-0 kubenswrapper[17411]: I0223 13:06:51.539901 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:51.572650 master-0 kubenswrapper[17411]: I0223 13:06:51.572573 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:06:51.573573 master-0 kubenswrapper[17411]: I0223 13:06:51.573540 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl"
Feb 23 13:06:51.668088 master-0 kubenswrapper[17411]: I0223 13:06:51.667928 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:06:51.912855 master-0 kubenswrapper[17411]: I0223 13:06:51.912801 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:06:51.915827 master-0 kubenswrapper[17411]: I0223 13:06:51.915794 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6"
Feb 23 13:06:52.087872 master-0 kubenswrapper[17411]: I0223 13:06:52.087781 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:52.089409 master-0 kubenswrapper[17411]: I0223 13:06:52.089341 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" event={"ID":"c33f208a-e158-47e2-83d5-ac792bf3a1d5","Type":"ContainerStarted","Data":"8157fb00b82726235d8f632f45fd2457f1f6e06df0bea0ef3a138a41ea799a56"}
Feb 23 13:06:52.089743 master-0 kubenswrapper[17411]: I0223 13:06:52.089714 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:06:52.238764 master-0 kubenswrapper[17411]: I0223 13:06:52.238711 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:52.251645 master-0 kubenswrapper[17411]: I0223 13:06:52.251598 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 23 13:06:52.313210 master-0 kubenswrapper[17411]: I0223 13:06:52.313148 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:06:52.364607 master-0 kubenswrapper[17411]: I0223 13:06:52.364443 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:06:52.364607 master-0 kubenswrapper[17411]: I0223 13:06:52.364599 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:06:52.369071 master-0 kubenswrapper[17411]: I0223 13:06:52.369022 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:06:52.599325 master-0 kubenswrapper[17411]: I0223 13:06:52.599237 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:06:52.603623 master-0 kubenswrapper[17411]: I0223 13:06:52.603565 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:06:52.758700 master-0 kubenswrapper[17411]: I0223 13:06:52.758626 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:06:52.768721 master-0 kubenswrapper[17411]: I0223 13:06:52.768648 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75"
Feb 23 13:06:52.961633 master-0 kubenswrapper[17411]: I0223 13:06:52.961550 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:06:52.970078 master-0 kubenswrapper[17411]: I0223 13:06:52.970006 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mldw4"
Feb 23 13:06:53.034898 master-0 kubenswrapper[17411]: I0223 13:06:53.034699 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mldw4"
Feb 23 13:06:53.373553 master-0 kubenswrapper[17411]: I0223 13:06:53.373427 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:53.409024 master-0 kubenswrapper[17411]: I0223 13:06:53.408944 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfrhg"
Feb 23 13:06:53.449381 master-0 kubenswrapper[17411]: I0223 13:06:53.449266 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bxqsd"
Feb 23 13:06:53.489404 master-0 kubenswrapper[17411]: I0223 13:06:53.489341 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bxqsd"
Feb 23 13:06:54.468871 master-0 kubenswrapper[17411]: I0223 13:06:54.468788 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p"
Feb 23 13:06:54.629414 master-0 kubenswrapper[17411]: I0223 13:06:54.629323 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8g7hl"]
Feb 23 13:06:54.629814 master-0 kubenswrapper[17411]: E0223 13:06:54.629726 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb 23 13:06:54.629814 master-0 kubenswrapper[17411]: I0223 13:06:54.629750 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller"
Feb
23 13:06:54.629814 master-0 kubenswrapper[17411]: E0223 13:06:54.629775 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5fa293-4526-4dd9-a0e4-a1db7d667092" containerName="installer" Feb 23 13:06:54.629814 master-0 kubenswrapper[17411]: I0223 13:06:54.629793 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5fa293-4526-4dd9-a0e4-a1db7d667092" containerName="installer" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: E0223 13:06:54.629834 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: I0223 13:06:54.629857 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: E0223 13:06:54.629874 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e50127-3c2e-4514-ace5-2cf6f9223abf" containerName="installer" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: I0223 13:06:54.629891 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e50127-3c2e-4514-ace5-2cf6f9223abf" containerName="installer" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: E0223 13:06:54.629917 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: I0223 13:06:54.629936 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: E0223 13:06:54.629964 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: I0223 13:06:54.629977 
17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: E0223 13:06:54.629999 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: I0223 13:06:54.630015 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: E0223 13:06:54.630052 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8a9026-ee0a-44c4-9c90-cd863f5461dd" containerName="installer" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: I0223 13:06:54.630070 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8a9026-ee0a-44c4-9c90-cd863f5461dd" containerName="installer" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: E0223 13:06:54.630096 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:06:54.630118 master-0 kubenswrapper[17411]: I0223 13:06:54.630111 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: E0223 13:06:54.630145 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a14e09-67c1-45e9-af34-bccb2fe3757e" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630165 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a14e09-67c1-45e9-af34-bccb2fe3757e" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: E0223 13:06:54.630191 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1860bead-61b8-4678-b583-c13c79575ef4" 
containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630210 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="1860bead-61b8-4678-b583-c13c79575ef4" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: E0223 13:06:54.630235 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630290 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630543 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630577 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630601 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e50127-3c2e-4514-ace5-2cf6f9223abf" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630632 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f533d847-cace-4951-b6f0-f7dc82ca9454" containerName="assisted-installer-controller" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630660 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bbed42-d2a0-4d6c-a25f-0d75a37dbab0" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630698 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8a9026-ee0a-44c4-9c90-cd863f5461dd" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: 
I0223 13:06:54.630727 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a14e09-67c1-45e9-af34-bccb2fe3757e" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630757 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="1860bead-61b8-4678-b583-c13c79575ef4" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630780 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630797 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5fa293-4526-4dd9-a0e4-a1db7d667092" containerName="installer" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630816 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 23 13:06:54.631032 master-0 kubenswrapper[17411]: I0223 13:06:54.630838 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 23 13:06:54.632070 master-0 kubenswrapper[17411]: I0223 13:06:54.631880 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.634519 master-0 kubenswrapper[17411]: I0223 13:06:54.634468 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 13:06:54.729487 master-0 kubenswrapper[17411]: I0223 13:06:54.729237 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8w6\" (UniqueName: \"kubernetes.io/projected/7ffa51ac-120f-4865-9017-ffbb36a89dd4-kube-api-access-ss8w6\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.729487 master-0 kubenswrapper[17411]: I0223 13:06:54.729361 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ffa51ac-120f-4865-9017-ffbb36a89dd4-proxy-tls\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.729487 master-0 kubenswrapper[17411]: I0223 13:06:54.729447 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7ffa51ac-120f-4865-9017-ffbb36a89dd4-rootfs\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.730126 master-0 kubenswrapper[17411]: I0223 13:06:54.729557 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ffa51ac-120f-4865-9017-ffbb36a89dd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.831589 master-0 kubenswrapper[17411]: I0223 13:06:54.831509 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss8w6\" (UniqueName: \"kubernetes.io/projected/7ffa51ac-120f-4865-9017-ffbb36a89dd4-kube-api-access-ss8w6\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.831589 master-0 kubenswrapper[17411]: I0223 13:06:54.831598 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ffa51ac-120f-4865-9017-ffbb36a89dd4-proxy-tls\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.831998 master-0 kubenswrapper[17411]: I0223 13:06:54.831662 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7ffa51ac-120f-4865-9017-ffbb36a89dd4-rootfs\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.831998 master-0 kubenswrapper[17411]: I0223 13:06:54.831705 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ffa51ac-120f-4865-9017-ffbb36a89dd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.832808 master-0 kubenswrapper[17411]: I0223 13:06:54.832765 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/7ffa51ac-120f-4865-9017-ffbb36a89dd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.832905 master-0 kubenswrapper[17411]: I0223 13:06:54.832864 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7ffa51ac-120f-4865-9017-ffbb36a89dd4-rootfs\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.833484 master-0 kubenswrapper[17411]: I0223 13:06:54.833418 17411 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 13:06:54.843919 master-0 kubenswrapper[17411]: I0223 13:06:54.843848 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ffa51ac-120f-4865-9017-ffbb36a89dd4-proxy-tls\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.868982 master-0 kubenswrapper[17411]: I0223 13:06:54.868907 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss8w6\" (UniqueName: \"kubernetes.io/projected/7ffa51ac-120f-4865-9017-ffbb36a89dd4-kube-api-access-ss8w6\") pod \"machine-config-daemon-8g7hl\" (UID: \"7ffa51ac-120f-4865-9017-ffbb36a89dd4\") " pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.955536 master-0 kubenswrapper[17411]: I0223 13:06:54.955433 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" Feb 23 13:06:54.990894 master-0 kubenswrapper[17411]: W0223 13:06:54.990824 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ffa51ac_120f_4865_9017_ffbb36a89dd4.slice/crio-83a5960b1b1d48190d128fe56f5ff7c299f493d6898a8698364167cd325c2fc4 WatchSource:0}: Error finding container 83a5960b1b1d48190d128fe56f5ff7c299f493d6898a8698364167cd325c2fc4: Status 404 returned error can't find the container with id 83a5960b1b1d48190d128fe56f5ff7c299f493d6898a8698364167cd325c2fc4 Feb 23 13:06:55.090689 master-0 kubenswrapper[17411]: I0223 13:06:55.090630 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:55.096316 master-0 kubenswrapper[17411]: I0223 13:06:55.096286 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6dcf85cb46-cmf75" Feb 23 13:06:55.100171 master-0 kubenswrapper[17411]: I0223 13:06:55.100109 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:55.108167 master-0 kubenswrapper[17411]: I0223 13:06:55.108127 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:55.110534 master-0 kubenswrapper[17411]: I0223 13:06:55.110489 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" event={"ID":"7ffa51ac-120f-4865-9017-ffbb36a89dd4","Type":"ContainerStarted","Data":"83a5960b1b1d48190d128fe56f5ff7c299f493d6898a8698364167cd325c2fc4"} Feb 23 13:06:55.117252 master-0 kubenswrapper[17411]: I0223 13:06:55.117199 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:55.139545 master-0 kubenswrapper[17411]: I0223 13:06:55.139489 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:55.143420 master-0 kubenswrapper[17411]: I0223 13:06:55.143328 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-548fc9dc5-x4nbx" Feb 23 13:06:56.123078 master-0 kubenswrapper[17411]: I0223 13:06:56.122925 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" event={"ID":"7ffa51ac-120f-4865-9017-ffbb36a89dd4","Type":"ContainerStarted","Data":"3086f2c8a37c68af5ebc2fd858c5e4c8a07dcd93801f29ade66ad0956996e3b2"} Feb 23 13:06:56.123078 master-0 kubenswrapper[17411]: I0223 13:06:56.123091 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" event={"ID":"7ffa51ac-120f-4865-9017-ffbb36a89dd4","Type":"ContainerStarted","Data":"da9b555f9c4ad03868437966a7f9ff86719418253a4699a54956336e2a6e5a14"} Feb 23 13:06:56.149431 master-0 kubenswrapper[17411]: I0223 13:06:56.149289 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8g7hl" podStartSLOduration=2.149197939 podStartE2EDuration="2.149197939s" podCreationTimestamp="2026-02-23 13:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:06:56.148077038 +0000 UTC m=+9.575583715" watchObservedRunningTime="2026-02-23 13:06:56.149197939 +0000 UTC m=+9.576704566" Feb 23 13:06:56.685043 master-0 kubenswrapper[17411]: I0223 13:06:56.684933 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:06:56.691944 master-0 kubenswrapper[17411]: I0223 13:06:56.691852 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-mjhwm" Feb 23 13:06:57.112313 master-0 kubenswrapper[17411]: I0223 13:06:57.112193 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:06:57.116306 master-0 kubenswrapper[17411]: I0223 13:06:57.116229 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-shl6r" Feb 23 13:06:57.124434 master-0 kubenswrapper[17411]: I0223 13:06:57.124359 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:57.265532 master-0 kubenswrapper[17411]: I0223 13:06:57.265453 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:57.270396 master-0 kubenswrapper[17411]: I0223 13:06:57.270370 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:57.320751 master-0 kubenswrapper[17411]: I0223 13:06:57.320691 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:57.326787 master-0 kubenswrapper[17411]: I0223 13:06:57.326731 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-5ddfd84bb7-vhg7p" Feb 23 13:06:57.494195 master-0 kubenswrapper[17411]: I0223 13:06:57.494084 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:57.561376 master-0 kubenswrapper[17411]: I0223 13:06:57.561273 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bxqsd" Feb 23 13:06:57.582139 master-0 kubenswrapper[17411]: I0223 13:06:57.581765 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:57.623331 master-0 kubenswrapper[17411]: I0223 13:06:57.623259 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:06:57.714464 master-0 kubenswrapper[17411]: I0223 13:06:57.714390 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:57.749372 master-0 kubenswrapper[17411]: I0223 13:06:57.749271 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mldw4" Feb 23 13:06:57.930898 master-0 kubenswrapper[17411]: I0223 13:06:57.930828 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:06:57.934026 master-0 kubenswrapper[17411]: I0223 13:06:57.933986 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:06:58.137270 master-0 kubenswrapper[17411]: I0223 13:06:58.137117 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:06:58.137270 master-0 kubenswrapper[17411]: I0223 13:06:58.137166 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:06:58.143759 master-0 kubenswrapper[17411]: I0223 13:06:58.143689 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:06:58.866733 master-0 kubenswrapper[17411]: I0223 13:06:58.866642 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b"] Feb 23 13:06:58.867623 master-0 kubenswrapper[17411]: I0223 13:06:58.867582 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:58.871016 master-0 kubenswrapper[17411]: I0223 13:06:58.870953 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 13:06:58.882213 master-0 kubenswrapper[17411]: I0223 13:06:58.882135 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b"] Feb 23 13:06:58.906276 master-0 kubenswrapper[17411]: I0223 13:06:58.906207 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2c8336c-0733-4e20-85ec-062e07b6fdc0-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:58.906421 master-0 kubenswrapper[17411]: I0223 13:06:58.906361 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mpc\" (UniqueName: \"kubernetes.io/projected/c2c8336c-0733-4e20-85ec-062e07b6fdc0-kube-api-access-z5mpc\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:58.906421 master-0 kubenswrapper[17411]: I0223 13:06:58.906401 17411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2c8336c-0733-4e20-85ec-062e07b6fdc0-proxy-tls\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:58.924176 master-0 kubenswrapper[17411]: I0223 13:06:58.922334 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:06:59.008024 master-0 kubenswrapper[17411]: I0223 13:06:59.007943 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mpc\" (UniqueName: \"kubernetes.io/projected/c2c8336c-0733-4e20-85ec-062e07b6fdc0-kube-api-access-z5mpc\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:59.008024 master-0 kubenswrapper[17411]: I0223 13:06:59.008002 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2c8336c-0733-4e20-85ec-062e07b6fdc0-proxy-tls\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:59.008454 master-0 kubenswrapper[17411]: I0223 13:06:59.008051 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2c8336c-0733-4e20-85ec-062e07b6fdc0-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:59.011119 master-0 
kubenswrapper[17411]: I0223 13:06:59.011065 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2c8336c-0733-4e20-85ec-062e07b6fdc0-proxy-tls\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:59.013055 master-0 kubenswrapper[17411]: I0223 13:06:59.012911 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2c8336c-0733-4e20-85ec-062e07b6fdc0-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:59.033543 master-0 kubenswrapper[17411]: I0223 13:06:59.033491 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mpc\" (UniqueName: \"kubernetes.io/projected/c2c8336c-0733-4e20-85ec-062e07b6fdc0-kube-api-access-z5mpc\") pod \"machine-config-controller-54cb48566c-p9r9b\" (UID: \"c2c8336c-0733-4e20-85ec-062e07b6fdc0\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" Feb 23 13:06:59.194270 master-0 kubenswrapper[17411]: I0223 13:06:59.194185 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b"
Feb 23 13:06:59.195363 master-0 kubenswrapper[17411]: I0223 13:06:59.194987 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:06:59.201734 master-0 kubenswrapper[17411]: I0223 13:06:59.200185 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"
Feb 23 13:06:59.620417 master-0 kubenswrapper[17411]: I0223 13:06:59.619318 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:06:59.620417 master-0 kubenswrapper[17411]: I0223 13:06:59.619602 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="39fda2f491fa2a50f4f315b834ed6d23" containerName="startup-monitor" containerID="cri-o://7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f" gracePeriod=5
Feb 23 13:06:59.670353 master-0 kubenswrapper[17411]: I0223 13:06:59.670286 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b"]
Feb 23 13:06:59.843984 master-0 kubenswrapper[17411]: I0223 13:06:59.843900 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:06:59.844535 master-0 kubenswrapper[17411]: I0223 13:06:59.844487 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:06:59.844535 master-0 kubenswrapper[17411]: I0223 13:06:59.844533 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:06:59.883736 master-0 kubenswrapper[17411]: I0223 13:06:59.883690 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb"
Feb 23 13:06:59.954792 master-0 kubenswrapper[17411]: I0223 13:06:59.954738 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq"]
Feb 23 13:06:59.955045 master-0 kubenswrapper[17411]: E0223 13:06:59.955017 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39fda2f491fa2a50f4f315b834ed6d23" containerName="startup-monitor"
Feb 23 13:06:59.955045 master-0 kubenswrapper[17411]: I0223 13:06:59.955045 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="39fda2f491fa2a50f4f315b834ed6d23" containerName="startup-monitor"
Feb 23 13:06:59.955260 master-0 kubenswrapper[17411]: I0223 13:06:59.955216 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="39fda2f491fa2a50f4f315b834ed6d23" containerName="startup-monitor"
Feb 23 13:06:59.955697 master-0 kubenswrapper[17411]: I0223 13:06:59.955671 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq"
Feb 23 13:06:59.959402 master-0 kubenswrapper[17411]: I0223 13:06:59.959363 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"]
Feb 23 13:06:59.960573 master-0 kubenswrapper[17411]: I0223 13:06:59.960536 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:06:59.961930 master-0 kubenswrapper[17411]: I0223 13:06:59.961890 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:06:59.962148 master-0 kubenswrapper[17411]: I0223 13:06:59.962108 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 13:06:59.964804 master-0 kubenswrapper[17411]: I0223 13:06:59.964771 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7b65dc9fcb-v92vx"]
Feb 23 13:06:59.965679 master-0 kubenswrapper[17411]: I0223 13:06:59.965635 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 23 13:06:59.965755 master-0 kubenswrapper[17411]: I0223 13:06:59.965731 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln"
Feb 23 13:06:59.965897 master-0 kubenswrapper[17411]: I0223 13:06:59.965822 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:06:59.968206 master-0 kubenswrapper[17411]: I0223 13:06:59.968147 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 23 13:06:59.968624 master-0 kubenswrapper[17411]: I0223 13:06:59.968473 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 23 13:06:59.968624 master-0 kubenswrapper[17411]: I0223 13:06:59.968531 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 23 13:06:59.969303 master-0 kubenswrapper[17411]: I0223 13:06:59.968730 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 23 13:06:59.969303 master-0 kubenswrapper[17411]: I0223 13:06:59.968769 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 23 13:06:59.971096 master-0 kubenswrapper[17411]: I0223 13:06:59.971041 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 23 13:06:59.973717 master-0 kubenswrapper[17411]: I0223 13:06:59.973666 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7"]
Feb 23 13:06:59.974727 master-0 kubenswrapper[17411]: I0223 13:06:59.974684 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7"
Feb 23 13:06:59.978665 master-0 kubenswrapper[17411]: I0223 13:06:59.978620 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"]
Feb 23 13:06:59.986556 master-0 kubenswrapper[17411]: I0223 13:06:59.986050 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq"]
Feb 23 13:06:59.994654 master-0 kubenswrapper[17411]: I0223 13:06:59.991645 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7"]
Feb 23 13:06:59.994654 master-0 kubenswrapper[17411]: I0223 13:06:59.992628 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rhj5d"]
Feb 23 13:06:59.994654 master-0 kubenswrapper[17411]: I0223 13:06:59.993727 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:06:59.996385 master-0 kubenswrapper[17411]: I0223 13:06:59.996122 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 23 13:06:59.996385 master-0 kubenswrapper[17411]: I0223 13:06:59.996396 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 23 13:06:59.996936 master-0 kubenswrapper[17411]: I0223 13:06:59.996879 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 23 13:07:00.008794 master-0 kubenswrapper[17411]: I0223 13:07:00.008730 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rhj5d"]
Feb 23 13:07:00.123347 master-0 kubenswrapper[17411]: I0223 13:07:00.123266 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/63a753f6-ddb1-4570-9e14-f81a87411014-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-5fngq\" (UID: \"63a753f6-ddb1-4570-9e14-f81a87411014\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq"
Feb 23 13:07:00.123347 master-0 kubenswrapper[17411]: I0223 13:07:00.123350 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7gzb\" (UniqueName: \"kubernetes.io/projected/a82698b6-5a88-4fc7-915c-e56e32aafa81-kube-api-access-l7gzb\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.123662 master-0 kubenswrapper[17411]: I0223 13:07:00.123385 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-stats-auth\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.123662 master-0 kubenswrapper[17411]: I0223 13:07:00.123415 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grk6t\" (UniqueName: \"kubernetes.io/projected/45f105e4-1a49-4bb7-8652-5c1290407353-kube-api-access-grk6t\") pod \"network-check-source-58fb6744f5-hjhz7\" (UID: \"45f105e4-1a49-4bb7-8652-5c1290407353\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7"
Feb 23 13:07:00.123662 master-0 kubenswrapper[17411]: I0223 13:07:00.123442 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fth9\" (UniqueName: \"kubernetes.io/projected/8994f73c-03b7-480e-b527-78a1f2fd8b3c-kube-api-access-5fth9\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.123662 master-0 kubenswrapper[17411]: I0223 13:07:00.123624 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a82698b6-5a88-4fc7-915c-e56e32aafa81-config-volume\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.123779 master-0 kubenswrapper[17411]: I0223 13:07:00.123744 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8994f73c-03b7-480e-b527-78a1f2fd8b3c-service-ca-bundle\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.123924 master-0 kubenswrapper[17411]: I0223 13:07:00.123899 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:00.123971 master-0 kubenswrapper[17411]: I0223 13:07:00.123938 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp789\" (UniqueName: \"kubernetes.io/projected/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-kube-api-access-rp789\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:00.124002 master-0 kubenswrapper[17411]: I0223 13:07:00.123973 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-default-certificate\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.124157 master-0 kubenswrapper[17411]: I0223 13:07:00.124095 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a82698b6-5a88-4fc7-915c-e56e32aafa81-secret-volume\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.124618 master-0 kubenswrapper[17411]: I0223 13:07:00.124574 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-metrics-certs\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.153870 master-0 kubenswrapper[17411]: I0223 13:07:00.153721 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" event={"ID":"c2c8336c-0733-4e20-85ec-062e07b6fdc0","Type":"ContainerStarted","Data":"655eaec546d4e144b9492bb58212da8ee3c6114d4022fec4f227d4cdebdfd0f3"}
Feb 23 13:07:00.153870 master-0 kubenswrapper[17411]: I0223 13:07:00.153782 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" event={"ID":"c2c8336c-0733-4e20-85ec-062e07b6fdc0","Type":"ContainerStarted","Data":"d189d5e12511ea80f4cdc17d241c4679d026c6da1f0e8d962f34e26c49ed72ca"}
Feb 23 13:07:00.153870 master-0 kubenswrapper[17411]: I0223 13:07:00.153796 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" event={"ID":"c2c8336c-0733-4e20-85ec-062e07b6fdc0","Type":"ContainerStarted","Data":"9d0ee0578588d674266409d02994c6f91bb8b578f9d5dd0bcee0bd81e843a67e"}
Feb 23 13:07:00.154116 master-0 kubenswrapper[17411]: I0223 13:07:00.154081 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 13:07:00.228213 master-0 kubenswrapper[17411]: I0223 13:07:00.226612 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:00.228213 master-0 kubenswrapper[17411]: I0223 13:07:00.226684 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp789\" (UniqueName: \"kubernetes.io/projected/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-kube-api-access-rp789\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:00.228213 master-0 kubenswrapper[17411]: I0223 13:07:00.227098 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-default-certificate\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.228213 master-0 kubenswrapper[17411]: I0223 13:07:00.227181 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a82698b6-5a88-4fc7-915c-e56e32aafa81-secret-volume\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.228213 master-0 kubenswrapper[17411]: E0223 13:07:00.227112 17411 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 23 13:07:00.228213 master-0 kubenswrapper[17411]: E0223 13:07:00.227393 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert podName:ce5a6b36-46f6-42b7-8240-ca27d4e47e30 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:00.727371509 +0000 UTC m=+14.154878166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert") pod "ingress-canary-rhj5d" (UID: "ce5a6b36-46f6-42b7-8240-ca27d4e47e30") : secret "canary-serving-cert" not found
Feb 23 13:07:00.228213 master-0 kubenswrapper[17411]: I0223 13:07:00.227782 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-metrics-certs\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.228981 master-0 kubenswrapper[17411]: I0223 13:07:00.228522 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/63a753f6-ddb1-4570-9e14-f81a87411014-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-5fngq\" (UID: \"63a753f6-ddb1-4570-9e14-f81a87411014\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq"
Feb 23 13:07:00.228981 master-0 kubenswrapper[17411]: I0223 13:07:00.228696 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7gzb\" (UniqueName: \"kubernetes.io/projected/a82698b6-5a88-4fc7-915c-e56e32aafa81-kube-api-access-l7gzb\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.228981 master-0 kubenswrapper[17411]: I0223 13:07:00.228725 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-stats-auth\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.228981 master-0 kubenswrapper[17411]: I0223 13:07:00.228760 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grk6t\" (UniqueName: \"kubernetes.io/projected/45f105e4-1a49-4bb7-8652-5c1290407353-kube-api-access-grk6t\") pod \"network-check-source-58fb6744f5-hjhz7\" (UID: \"45f105e4-1a49-4bb7-8652-5c1290407353\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7"
Feb 23 13:07:00.228981 master-0 kubenswrapper[17411]: I0223 13:07:00.228789 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fth9\" (UniqueName: \"kubernetes.io/projected/8994f73c-03b7-480e-b527-78a1f2fd8b3c-kube-api-access-5fth9\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.228981 master-0 kubenswrapper[17411]: I0223 13:07:00.228974 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a82698b6-5a88-4fc7-915c-e56e32aafa81-config-volume\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.229232 master-0 kubenswrapper[17411]: I0223 13:07:00.229079 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8994f73c-03b7-480e-b527-78a1f2fd8b3c-service-ca-bundle\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.230383 master-0 kubenswrapper[17411]: I0223 13:07:00.229980 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8994f73c-03b7-480e-b527-78a1f2fd8b3c-service-ca-bundle\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.232451 master-0 kubenswrapper[17411]: I0223 13:07:00.232406 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-metrics-certs\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.232520 master-0 kubenswrapper[17411]: I0223 13:07:00.232467 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a82698b6-5a88-4fc7-915c-e56e32aafa81-config-volume\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.233466 master-0 kubenswrapper[17411]: I0223 13:07:00.233412 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/63a753f6-ddb1-4570-9e14-f81a87411014-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-5fngq\" (UID: \"63a753f6-ddb1-4570-9e14-f81a87411014\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq"
Feb 23 13:07:00.235296 master-0 kubenswrapper[17411]: I0223 13:07:00.234336 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-stats-auth\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.235296 master-0 kubenswrapper[17411]: I0223 13:07:00.234975 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8994f73c-03b7-480e-b527-78a1f2fd8b3c-default-certificate\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.264264 master-0 kubenswrapper[17411]: I0223 13:07:00.255509 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grk6t\" (UniqueName: \"kubernetes.io/projected/45f105e4-1a49-4bb7-8652-5c1290407353-kube-api-access-grk6t\") pod \"network-check-source-58fb6744f5-hjhz7\" (UID: \"45f105e4-1a49-4bb7-8652-5c1290407353\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7"
Feb 23 13:07:00.264264 master-0 kubenswrapper[17411]: I0223 13:07:00.256292 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a82698b6-5a88-4fc7-915c-e56e32aafa81-secret-volume\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.264264 master-0 kubenswrapper[17411]: I0223 13:07:00.256684 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp789\" (UniqueName: \"kubernetes.io/projected/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-kube-api-access-rp789\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:00.264264 master-0 kubenswrapper[17411]: I0223 13:07:00.260469 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fth9\" (UniqueName: \"kubernetes.io/projected/8994f73c-03b7-480e-b527-78a1f2fd8b3c-kube-api-access-5fth9\") pod \"router-default-7b65dc9fcb-v92vx\" (UID: \"8994f73c-03b7-480e-b527-78a1f2fd8b3c\") " pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.264264 master-0 kubenswrapper[17411]: I0223 13:07:00.261495 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7gzb\" (UniqueName: \"kubernetes.io/projected/a82698b6-5a88-4fc7-915c-e56e32aafa81-kube-api-access-l7gzb\") pod \"collect-profiles-29530860-9f5kd\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.295268 master-0 kubenswrapper[17411]: I0223 13:07:00.293521 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq"
Feb 23 13:07:00.311610 master-0 kubenswrapper[17411]: I0223 13:07:00.311545 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"
Feb 23 13:07:00.351322 master-0 kubenswrapper[17411]: I0223 13:07:00.349523 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx"
Feb 23 13:07:00.376267 master-0 kubenswrapper[17411]: I0223 13:07:00.373274 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7"
Feb 23 13:07:00.408561 master-0 kubenswrapper[17411]: I0223 13:07:00.408530 17411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:07:00.750183 master-0 kubenswrapper[17411]: I0223 13:07:00.749448 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:00.750183 master-0 kubenswrapper[17411]: E0223 13:07:00.749601 17411 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 23 13:07:00.750183 master-0 kubenswrapper[17411]: E0223 13:07:00.749801 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert podName:ce5a6b36-46f6-42b7-8240-ca27d4e47e30 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:01.749782289 +0000 UTC m=+15.177288906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert") pod "ingress-canary-rhj5d" (UID: "ce5a6b36-46f6-42b7-8240-ca27d4e47e30") : secret "canary-serving-cert" not found
Feb 23 13:07:00.762707 master-0 kubenswrapper[17411]: I0223 13:07:00.762569 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" podStartSLOduration=2.75480458 podStartE2EDuration="2.75480458s" podCreationTimestamp="2026-02-23 13:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:00.178852745 +0000 UTC m=+13.606359402" watchObservedRunningTime="2026-02-23 13:07:00.75480458 +0000 UTC m=+14.182311177"
Feb 23 13:07:00.765801 master-0 kubenswrapper[17411]: W0223 13:07:00.765732 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63a753f6_ddb1_4570_9e14_f81a87411014.slice/crio-2971fcf972e9aafbe01ca1c6e870a1105dec166d781bbb51719c2119ce3137b2 WatchSource:0}: Error finding container 2971fcf972e9aafbe01ca1c6e870a1105dec166d781bbb51719c2119ce3137b2: Status 404 returned error can't find the container with id 2971fcf972e9aafbe01ca1c6e870a1105dec166d781bbb51719c2119ce3137b2
Feb 23 13:07:00.766095 master-0 kubenswrapper[17411]: I0223 13:07:00.766044 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq"]
Feb 23 13:07:00.883141 master-0 kubenswrapper[17411]: I0223 13:07:00.883090 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7"]
Feb 23 13:07:00.883318 master-0 kubenswrapper[17411]: I0223 13:07:00.883264 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd"]
Feb 23 13:07:00.887817 master-0 kubenswrapper[17411]: W0223 13:07:00.887690 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45f105e4_1a49_4bb7_8652_5c1290407353.slice/crio-4e3b9ca64f3c4567f52d2c53d286e18654dfb97071a496722caca3a497e04193 WatchSource:0}: Error finding container 4e3b9ca64f3c4567f52d2c53d286e18654dfb97071a496722caca3a497e04193: Status 404 returned error can't find the container with id 4e3b9ca64f3c4567f52d2c53d286e18654dfb97071a496722caca3a497e04193
Feb 23 13:07:00.913418 master-0 kubenswrapper[17411]: I0223 13:07:00.913374 17411 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 13:07:01.167372 master-0 kubenswrapper[17411]: I0223 13:07:01.167305 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq" event={"ID":"63a753f6-ddb1-4570-9e14-f81a87411014","Type":"ContainerStarted","Data":"2971fcf972e9aafbe01ca1c6e870a1105dec166d781bbb51719c2119ce3137b2"}
Feb 23 13:07:01.176377 master-0 kubenswrapper[17411]: I0223 13:07:01.176280 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx" event={"ID":"8994f73c-03b7-480e-b527-78a1f2fd8b3c","Type":"ContainerStarted","Data":"cc3db56ed0f88b8f22950db78d2e3f4d84ceadf3568d00b8c68a8afa51e9565a"}
Feb 23 13:07:01.192670 master-0 kubenswrapper[17411]: I0223 13:07:01.192594 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7" event={"ID":"45f105e4-1a49-4bb7-8652-5c1290407353","Type":"ContainerStarted","Data":"1953d4b109343384d174c3eb6b2f9c2842129631315da7c7d06845bf3a29e408"}
Feb 23 13:07:01.192670 master-0 kubenswrapper[17411]: I0223 13:07:01.192670 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7" event={"ID":"45f105e4-1a49-4bb7-8652-5c1290407353","Type":"ContainerStarted","Data":"4e3b9ca64f3c4567f52d2c53d286e18654dfb97071a496722caca3a497e04193"}
Feb 23 13:07:01.198932 master-0 kubenswrapper[17411]: I0223 13:07:01.198884 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd" event={"ID":"a82698b6-5a88-4fc7-915c-e56e32aafa81","Type":"ContainerStarted","Data":"e0dad80a19146271287d7e69814f5f02a87e0d1606d5afb0ccce12f25e7c789f"}
Feb 23 13:07:01.198932 master-0 kubenswrapper[17411]: I0223 13:07:01.198935 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd" event={"ID":"a82698b6-5a88-4fc7-915c-e56e32aafa81","Type":"ContainerStarted","Data":"37019f9a69363590fc785ac0e6fb4146fd34ab41bce22c42dfc2872a8ebef287"}
Feb 23 13:07:01.218727 master-0 kubenswrapper[17411]: I0223 13:07:01.218638 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-hjhz7" podStartSLOduration=410.218609025 podStartE2EDuration="6m50.218609025s" podCreationTimestamp="2026-02-23 13:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:01.217842063 +0000 UTC m=+14.645348670" watchObservedRunningTime="2026-02-23 13:07:01.218609025 +0000 UTC m=+14.646115622"
Feb 23 13:07:01.808746 master-0 kubenswrapper[17411]: I0223 13:07:01.808669 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:01.809359 master-0 kubenswrapper[17411]: E0223 13:07:01.808867 17411 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 23 13:07:01.809359 master-0 kubenswrapper[17411]: E0223 13:07:01.808924 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert podName:ce5a6b36-46f6-42b7-8240-ca27d4e47e30 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:03.808906714 +0000 UTC m=+17.236413311 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert") pod "ingress-canary-rhj5d" (UID: "ce5a6b36-46f6-42b7-8240-ca27d4e47e30") : secret "canary-serving-cert" not found
Feb 23 13:07:02.152832 master-0 kubenswrapper[17411]: I0223 13:07:02.152727 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:07:02.181737 master-0 kubenswrapper[17411]: I0223 13:07:02.181282 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd" podStartSLOduration=422.181265689 podStartE2EDuration="7m2.181265689s" podCreationTimestamp="2026-02-23 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:01.263700682 +0000 UTC m=+14.691207279" watchObservedRunningTime="2026-02-23 13:07:02.181265689 +0000 UTC m=+15.608772286"
Feb 23 13:07:02.194409 master-0 kubenswrapper[17411]: I0223 13:07:02.194362 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r8xxs"
Feb 23 13:07:03.329356 master-0 kubenswrapper[17411]: I0223 13:07:03.329290 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cwh7s"]
Feb 23 13:07:03.330063 master-0 kubenswrapper[17411]: I0223 13:07:03.330033 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cwh7s"
Feb 23 13:07:03.333721 master-0 kubenswrapper[17411]: I0223 13:07:03.333680 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 23 13:07:03.333811 master-0 kubenswrapper[17411]: I0223 13:07:03.333765 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 23 13:07:03.333977 master-0 kubenswrapper[17411]: I0223 13:07:03.333934 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-8ns2k"
Feb 23 13:07:03.437456 master-0 kubenswrapper[17411]: I0223 13:07:03.437383 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/65e77d82-6aeb-4f84-acad-e70c095996ed-node-bootstrap-token\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s"
Feb 23 13:07:03.437456 master-0 kubenswrapper[17411]: I0223 13:07:03.437469 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zpgx\" (UniqueName: \"kubernetes.io/projected/65e77d82-6aeb-4f84-acad-e70c095996ed-kube-api-access-7zpgx\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s"
Feb 23 13:07:03.437732 master-0 kubenswrapper[17411]: I0223 13:07:03.437502 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/65e77d82-6aeb-4f84-acad-e70c095996ed-certs\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s"
Feb 23 13:07:03.539134 master-0 kubenswrapper[17411]: I0223 13:07:03.539056 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zpgx\" (UniqueName: \"kubernetes.io/projected/65e77d82-6aeb-4f84-acad-e70c095996ed-kube-api-access-7zpgx\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s"
Feb 23 13:07:03.539134 master-0 kubenswrapper[17411]: I0223 13:07:03.539130 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/65e77d82-6aeb-4f84-acad-e70c095996ed-certs\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s"
Feb 23 13:07:03.539429 master-0 kubenswrapper[17411]: I0223 13:07:03.539200 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/65e77d82-6aeb-4f84-acad-e70c095996ed-node-bootstrap-token\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s"
Feb 23 13:07:03.546323 master-0 kubenswrapper[17411]: I0223 13:07:03.544031 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/65e77d82-6aeb-4f84-acad-e70c095996ed-node-bootstrap-token\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s"
Feb 23 13:07:03.546323 master-0 
kubenswrapper[17411]: I0223 13:07:03.544949 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/65e77d82-6aeb-4f84-acad-e70c095996ed-certs\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s" Feb 23 13:07:03.559357 master-0 kubenswrapper[17411]: I0223 13:07:03.559298 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zpgx\" (UniqueName: \"kubernetes.io/projected/65e77d82-6aeb-4f84-acad-e70c095996ed-kube-api-access-7zpgx\") pod \"machine-config-server-cwh7s\" (UID: \"65e77d82-6aeb-4f84-acad-e70c095996ed\") " pod="openshift-machine-config-operator/machine-config-server-cwh7s" Feb 23 13:07:03.659548 master-0 kubenswrapper[17411]: I0223 13:07:03.659476 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cwh7s" Feb 23 13:07:03.681750 master-0 kubenswrapper[17411]: W0223 13:07:03.681659 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65e77d82_6aeb_4f84_acad_e70c095996ed.slice/crio-fa80ef7755e2e35d6b3d7aa8bb2920afc2ad090b1c15652522c77fcca4083249 WatchSource:0}: Error finding container fa80ef7755e2e35d6b3d7aa8bb2920afc2ad090b1c15652522c77fcca4083249: Status 404 returned error can't find the container with id fa80ef7755e2e35d6b3d7aa8bb2920afc2ad090b1c15652522c77fcca4083249 Feb 23 13:07:03.844086 master-0 kubenswrapper[17411]: I0223 13:07:03.844019 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d" Feb 23 13:07:03.844346 master-0 
kubenswrapper[17411]: E0223 13:07:03.844187 17411 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 23 13:07:03.844346 master-0 kubenswrapper[17411]: E0223 13:07:03.844237 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert podName:ce5a6b36-46f6-42b7-8240-ca27d4e47e30 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:07.844221164 +0000 UTC m=+21.271727761 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert") pod "ingress-canary-rhj5d" (UID: "ce5a6b36-46f6-42b7-8240-ca27d4e47e30") : secret "canary-serving-cert" not found Feb 23 13:07:04.237372 master-0 kubenswrapper[17411]: I0223 13:07:04.237311 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx" event={"ID":"8994f73c-03b7-480e-b527-78a1f2fd8b3c","Type":"ContainerStarted","Data":"d80605bb5136c6c12423a126640ae8d3ee05044ad450daf4c6b3bb9b0f6198c6"} Feb 23 13:07:04.243757 master-0 kubenswrapper[17411]: I0223 13:07:04.243711 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cwh7s" event={"ID":"65e77d82-6aeb-4f84-acad-e70c095996ed","Type":"ContainerStarted","Data":"971a867f0d3e6046309dbaecfed9de68cc97a143456aa464669e64cf8a61802c"} Feb 23 13:07:04.243992 master-0 kubenswrapper[17411]: I0223 13:07:04.243978 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cwh7s" event={"ID":"65e77d82-6aeb-4f84-acad-e70c095996ed","Type":"ContainerStarted","Data":"fa80ef7755e2e35d6b3d7aa8bb2920afc2ad090b1c15652522c77fcca4083249"} Feb 23 13:07:04.245540 master-0 kubenswrapper[17411]: I0223 13:07:04.245518 17411 generic.go:334] "Generic (PLEG): container finished" 
podID="a82698b6-5a88-4fc7-915c-e56e32aafa81" containerID="e0dad80a19146271287d7e69814f5f02a87e0d1606d5afb0ccce12f25e7c789f" exitCode=0 Feb 23 13:07:04.245650 master-0 kubenswrapper[17411]: I0223 13:07:04.245580 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd" event={"ID":"a82698b6-5a88-4fc7-915c-e56e32aafa81","Type":"ContainerDied","Data":"e0dad80a19146271287d7e69814f5f02a87e0d1606d5afb0ccce12f25e7c789f"} Feb 23 13:07:04.257527 master-0 kubenswrapper[17411]: I0223 13:07:04.249004 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq" event={"ID":"63a753f6-ddb1-4570-9e14-f81a87411014","Type":"ContainerStarted","Data":"3f367f8885515b0bb3bf96dfe921042b79dbb7fbb2ef7d69c603a884d94449a7"} Feb 23 13:07:04.257527 master-0 kubenswrapper[17411]: I0223 13:07:04.249740 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq" Feb 23 13:07:04.257527 master-0 kubenswrapper[17411]: I0223 13:07:04.255784 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-zwmzz"] Feb 23 13:07:04.257527 master-0 kubenswrapper[17411]: I0223 13:07:04.256553 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.260584 master-0 kubenswrapper[17411]: I0223 13:07:04.260558 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-8bvc9" Feb 23 13:07:04.260584 master-0 kubenswrapper[17411]: I0223 13:07:04.260572 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 13:07:04.266256 master-0 kubenswrapper[17411]: I0223 13:07:04.266179 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq" Feb 23 13:07:04.268428 master-0 kubenswrapper[17411]: I0223 13:07:04.266573 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 13:07:04.268428 master-0 kubenswrapper[17411]: I0223 13:07:04.267149 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 13:07:04.268428 master-0 kubenswrapper[17411]: I0223 13:07:04.267637 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 13:07:04.268428 master-0 kubenswrapper[17411]: I0223 13:07:04.267644 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 13:07:04.277214 master-0 kubenswrapper[17411]: I0223 13:07:04.277161 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-zwmzz"] Feb 23 13:07:04.287353 master-0 kubenswrapper[17411]: I0223 13:07:04.285063 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx" podStartSLOduration=308.187810699 podStartE2EDuration="5m11.285042131s" 
podCreationTimestamp="2026-02-23 13:01:53 +0000 UTC" firstStartedPulling="2026-02-23 13:07:00.408451447 +0000 UTC m=+13.835958034" lastFinishedPulling="2026-02-23 13:07:03.505682869 +0000 UTC m=+16.933189466" observedRunningTime="2026-02-23 13:07:04.284464855 +0000 UTC m=+17.711971452" watchObservedRunningTime="2026-02-23 13:07:04.285042131 +0000 UTC m=+17.712548728" Feb 23 13:07:04.333223 master-0 kubenswrapper[17411]: I0223 13:07:04.332458 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cwh7s" podStartSLOduration=1.332435393 podStartE2EDuration="1.332435393s" podCreationTimestamp="2026-02-23 13:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:04.331080915 +0000 UTC m=+17.758587512" watchObservedRunningTime="2026-02-23 13:07:04.332435393 +0000 UTC m=+17.759942000" Feb 23 13:07:04.349626 master-0 kubenswrapper[17411]: I0223 13:07:04.349219 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-5fngq" podStartSLOduration=309.61698777 podStartE2EDuration="5m12.349191664s" podCreationTimestamp="2026-02-23 13:01:52 +0000 UTC" firstStartedPulling="2026-02-23 13:07:00.76830602 +0000 UTC m=+14.195812637" lastFinishedPulling="2026-02-23 13:07:03.500509904 +0000 UTC m=+16.928016531" observedRunningTime="2026-02-23 13:07:04.347764864 +0000 UTC m=+17.775271471" watchObservedRunningTime="2026-02-23 13:07:04.349191664 +0000 UTC m=+17.776698281" Feb 23 13:07:04.350988 master-0 kubenswrapper[17411]: I0223 13:07:04.350939 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx" Feb 23 13:07:04.354396 master-0 kubenswrapper[17411]: I0223 13:07:04.354343 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx" Feb 23 13:07:04.359189 master-0 kubenswrapper[17411]: I0223 13:07:04.359091 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-config\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.359365 master-0 kubenswrapper[17411]: I0223 13:07:04.359262 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.359462 master-0 kubenswrapper[17411]: I0223 13:07:04.359379 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679fabb5-a261-402e-b5be-8fe7f0da0ec8-serving-cert\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.359695 master-0 kubenswrapper[17411]: I0223 13:07:04.359657 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mcx\" (UniqueName: \"kubernetes.io/projected/679fabb5-a261-402e-b5be-8fe7f0da0ec8-kube-api-access-p2mcx\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.461614 master-0 kubenswrapper[17411]: I0223 13:07:04.461556 17411 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-p2mcx\" (UniqueName: \"kubernetes.io/projected/679fabb5-a261-402e-b5be-8fe7f0da0ec8-kube-api-access-p2mcx\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.461846 master-0 kubenswrapper[17411]: I0223 13:07:04.461634 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-config\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.461846 master-0 kubenswrapper[17411]: I0223 13:07:04.461658 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.461846 master-0 kubenswrapper[17411]: I0223 13:07:04.461683 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679fabb5-a261-402e-b5be-8fe7f0da0ec8-serving-cert\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.463786 master-0 kubenswrapper[17411]: I0223 13:07:04.463727 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-config\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.464565 master-0 
kubenswrapper[17411]: E0223 13:07:04.464505 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:04.964467243 +0000 UTC m=+18.391973850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:04.466276 master-0 kubenswrapper[17411]: I0223 13:07:04.466198 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679fabb5-a261-402e-b5be-8fe7f0da0ec8-serving-cert\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.483659 master-0 kubenswrapper[17411]: I0223 13:07:04.482546 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mcx\" (UniqueName: \"kubernetes.io/projected/679fabb5-a261-402e-b5be-8fe7f0da0ec8-kube-api-access-p2mcx\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.751505 master-0 kubenswrapper[17411]: I0223 13:07:04.751465 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_39fda2f491fa2a50f4f315b834ed6d23/startup-monitor/0.log" Feb 23 13:07:04.751736 master-0 kubenswrapper[17411]: I0223 13:07:04.751538 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:07:04.865978 master-0 kubenswrapper[17411]: I0223 13:07:04.865909 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir\") pod \"39fda2f491fa2a50f4f315b834ed6d23\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " Feb 23 13:07:04.865978 master-0 kubenswrapper[17411]: I0223 13:07:04.865965 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock\") pod \"39fda2f491fa2a50f4f315b834ed6d23\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " Feb 23 13:07:04.865978 master-0 kubenswrapper[17411]: I0223 13:07:04.866007 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log\") pod \"39fda2f491fa2a50f4f315b834ed6d23\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " Feb 23 13:07:04.866288 master-0 kubenswrapper[17411]: I0223 13:07:04.866042 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir\") pod \"39fda2f491fa2a50f4f315b834ed6d23\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " Feb 23 13:07:04.866288 master-0 kubenswrapper[17411]: I0223 13:07:04.866090 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "39fda2f491fa2a50f4f315b834ed6d23" (UID: "39fda2f491fa2a50f4f315b834ed6d23"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:07:04.866288 master-0 kubenswrapper[17411]: I0223 13:07:04.866147 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests\") pod \"39fda2f491fa2a50f4f315b834ed6d23\" (UID: \"39fda2f491fa2a50f4f315b834ed6d23\") " Feb 23 13:07:04.866288 master-0 kubenswrapper[17411]: I0223 13:07:04.866167 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log" (OuterVolumeSpecName: "var-log") pod "39fda2f491fa2a50f4f315b834ed6d23" (UID: "39fda2f491fa2a50f4f315b834ed6d23"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:07:04.866288 master-0 kubenswrapper[17411]: I0223 13:07:04.866228 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock" (OuterVolumeSpecName: "var-lock") pod "39fda2f491fa2a50f4f315b834ed6d23" (UID: "39fda2f491fa2a50f4f315b834ed6d23"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:07:04.866464 master-0 kubenswrapper[17411]: I0223 13:07:04.866323 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests" (OuterVolumeSpecName: "manifests") pod "39fda2f491fa2a50f4f315b834ed6d23" (UID: "39fda2f491fa2a50f4f315b834ed6d23"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:07:04.866648 master-0 kubenswrapper[17411]: I0223 13:07:04.866621 17411 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-manifests\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:04.866648 master-0 kubenswrapper[17411]: I0223 13:07:04.866639 17411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:04.866741 master-0 kubenswrapper[17411]: I0223 13:07:04.866652 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:04.866741 master-0 kubenswrapper[17411]: I0223 13:07:04.866663 17411 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-var-log\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:04.877357 master-0 kubenswrapper[17411]: I0223 13:07:04.877183 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "39fda2f491fa2a50f4f315b834ed6d23" (UID: "39fda2f491fa2a50f4f315b834ed6d23"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:07:04.878862 master-0 kubenswrapper[17411]: I0223 13:07:04.878605 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39fda2f491fa2a50f4f315b834ed6d23" path="/var/lib/kubelet/pods/39fda2f491fa2a50f4f315b834ed6d23/volumes" Feb 23 13:07:04.879027 master-0 kubenswrapper[17411]: I0223 13:07:04.878887 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 23 13:07:04.894939 master-0 kubenswrapper[17411]: I0223 13:07:04.894883 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:07:04.894939 master-0 kubenswrapper[17411]: I0223 13:07:04.894929 17411 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="41aef424-29d7-4029-bbc7-9621cb74b311" Feb 23 13:07:04.896518 master-0 kubenswrapper[17411]: I0223 13:07:04.896475 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:07:04.896621 master-0 kubenswrapper[17411]: I0223 13:07:04.896517 17411 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="41aef424-29d7-4029-bbc7-9621cb74b311" Feb 23 13:07:04.933613 master-0 kubenswrapper[17411]: I0223 13:07:04.933560 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-lthpv"] Feb 23 13:07:04.934852 master-0 kubenswrapper[17411]: I0223 13:07:04.934823 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:04.942708 master-0 kubenswrapper[17411]: I0223 13:07:04.941749 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 23 13:07:04.942708 master-0 kubenswrapper[17411]: I0223 13:07:04.941977 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 23 13:07:04.942708 master-0 kubenswrapper[17411]: I0223 13:07:04.942330 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-hjsc8" Feb 23 13:07:04.942708 master-0 kubenswrapper[17411]: I0223 13:07:04.942596 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 23 13:07:04.954439 master-0 kubenswrapper[17411]: I0223 13:07:04.954385 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-lthpv"] Feb 23 13:07:04.971307 master-0 kubenswrapper[17411]: I0223 13:07:04.971223 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:04.971503 master-0 kubenswrapper[17411]: I0223 13:07:04.971426 17411 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/39fda2f491fa2a50f4f315b834ed6d23-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:04.971629 master-0 kubenswrapper[17411]: E0223 13:07:04.971566 17411 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:05.971546224 +0000 UTC m=+19.399052821 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:05.072861 master-0 kubenswrapper[17411]: I0223 13:07:05.072745 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwt6c\" (UniqueName: \"kubernetes.io/projected/43f01c26-9770-4b24-a91e-461c4b21ba31-kube-api-access-nwt6c\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.072861 master-0 kubenswrapper[17411]: I0223 13:07:05.072814 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43f01c26-9770-4b24-a91e-461c4b21ba31-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.073186 master-0 kubenswrapper[17411]: I0223 13:07:05.072889 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.073186 
master-0 kubenswrapper[17411]: I0223 13:07:05.073096 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.175905 master-0 kubenswrapper[17411]: I0223 13:07:05.174971 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.175905 master-0 kubenswrapper[17411]: I0223 13:07:05.175211 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.175905 master-0 kubenswrapper[17411]: I0223 13:07:05.175262 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwt6c\" (UniqueName: \"kubernetes.io/projected/43f01c26-9770-4b24-a91e-461c4b21ba31-kube-api-access-nwt6c\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.175905 master-0 kubenswrapper[17411]: I0223 13:07:05.175291 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/43f01c26-9770-4b24-a91e-461c4b21ba31-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.175905 master-0 kubenswrapper[17411]: E0223 13:07:05.175592 17411 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 23 13:07:05.175905 master-0 kubenswrapper[17411]: E0223 13:07:05.175710 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls podName:43f01c26-9770-4b24-a91e-461c4b21ba31 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:05.675681111 +0000 UTC m=+19.103187788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-lthpv" (UID: "43f01c26-9770-4b24-a91e-461c4b21ba31") : secret "prometheus-operator-tls" not found Feb 23 13:07:05.176523 master-0 kubenswrapper[17411]: I0223 13:07:05.176202 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43f01c26-9770-4b24-a91e-461c4b21ba31-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.179536 master-0 kubenswrapper[17411]: I0223 13:07:05.179471 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") 
" pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.204119 master-0 kubenswrapper[17411]: I0223 13:07:05.204029 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwt6c\" (UniqueName: \"kubernetes.io/projected/43f01c26-9770-4b24-a91e-461c4b21ba31-kube-api-access-nwt6c\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.257099 master-0 kubenswrapper[17411]: I0223 13:07:05.257013 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_39fda2f491fa2a50f4f315b834ed6d23/startup-monitor/0.log" Feb 23 13:07:05.257359 master-0 kubenswrapper[17411]: I0223 13:07:05.257105 17411 generic.go:334] "Generic (PLEG): container finished" podID="39fda2f491fa2a50f4f315b834ed6d23" containerID="7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f" exitCode=137 Feb 23 13:07:05.257451 master-0 kubenswrapper[17411]: I0223 13:07:05.257408 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:07:05.257550 master-0 kubenswrapper[17411]: I0223 13:07:05.257512 17411 scope.go:117] "RemoveContainer" containerID="7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f" Feb 23 13:07:05.258284 master-0 kubenswrapper[17411]: I0223 13:07:05.258112 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx" Feb 23 13:07:05.262087 master-0 kubenswrapper[17411]: I0223 13:07:05.262052 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7b65dc9fcb-v92vx" Feb 23 13:07:05.275379 master-0 kubenswrapper[17411]: I0223 13:07:05.275343 17411 scope.go:117] "RemoveContainer" containerID="7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f" Feb 23 13:07:05.276682 master-0 kubenswrapper[17411]: E0223 13:07:05.276653 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f\": container with ID starting with 7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f not found: ID does not exist" containerID="7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f" Feb 23 13:07:05.284909 master-0 kubenswrapper[17411]: I0223 13:07:05.276698 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f"} err="failed to get container status \"7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f\": rpc error: code = NotFound desc = could not find container \"7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f\": container with ID starting with 7c41d443ead911dab9f8a23e07a5dbc1e28b0dce65cdefd10a7cd72290173b8f not found: ID does not exist" Feb 23 13:07:05.529261 
master-0 kubenswrapper[17411]: I0223 13:07:05.529206 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd" Feb 23 13:07:05.683412 master-0 kubenswrapper[17411]: I0223 13:07:05.683339 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7gzb\" (UniqueName: \"kubernetes.io/projected/a82698b6-5a88-4fc7-915c-e56e32aafa81-kube-api-access-l7gzb\") pod \"a82698b6-5a88-4fc7-915c-e56e32aafa81\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " Feb 23 13:07:05.683686 master-0 kubenswrapper[17411]: I0223 13:07:05.683424 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a82698b6-5a88-4fc7-915c-e56e32aafa81-secret-volume\") pod \"a82698b6-5a88-4fc7-915c-e56e32aafa81\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " Feb 23 13:07:05.683686 master-0 kubenswrapper[17411]: I0223 13:07:05.683478 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a82698b6-5a88-4fc7-915c-e56e32aafa81-config-volume\") pod \"a82698b6-5a88-4fc7-915c-e56e32aafa81\" (UID: \"a82698b6-5a88-4fc7-915c-e56e32aafa81\") " Feb 23 13:07:05.684150 master-0 kubenswrapper[17411]: E0223 13:07:05.684109 17411 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 23 13:07:05.684448 master-0 kubenswrapper[17411]: E0223 13:07:05.684212 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls podName:43f01c26-9770-4b24-a91e-461c4b21ba31 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:06.684186252 +0000 UTC m=+20.111692889 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-lthpv" (UID: "43f01c26-9770-4b24-a91e-461c4b21ba31") : secret "prometheus-operator-tls" not found Feb 23 13:07:05.684807 master-0 kubenswrapper[17411]: I0223 13:07:05.684731 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82698b6-5a88-4fc7-915c-e56e32aafa81-config-volume" (OuterVolumeSpecName: "config-volume") pod "a82698b6-5a88-4fc7-915c-e56e32aafa81" (UID: "a82698b6-5a88-4fc7-915c-e56e32aafa81"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:07:05.684912 master-0 kubenswrapper[17411]: I0223 13:07:05.683936 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:05.685086 master-0 kubenswrapper[17411]: I0223 13:07:05.685046 17411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a82698b6-5a88-4fc7-915c-e56e32aafa81-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:05.687353 master-0 kubenswrapper[17411]: I0223 13:07:05.687235 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82698b6-5a88-4fc7-915c-e56e32aafa81-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a82698b6-5a88-4fc7-915c-e56e32aafa81" (UID: "a82698b6-5a88-4fc7-915c-e56e32aafa81"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:07:05.692858 master-0 kubenswrapper[17411]: I0223 13:07:05.692757 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82698b6-5a88-4fc7-915c-e56e32aafa81-kube-api-access-l7gzb" (OuterVolumeSpecName: "kube-api-access-l7gzb") pod "a82698b6-5a88-4fc7-915c-e56e32aafa81" (UID: "a82698b6-5a88-4fc7-915c-e56e32aafa81"). InnerVolumeSpecName "kube-api-access-l7gzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:07:05.787015 master-0 kubenswrapper[17411]: I0223 13:07:05.786826 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7gzb\" (UniqueName: \"kubernetes.io/projected/a82698b6-5a88-4fc7-915c-e56e32aafa81-kube-api-access-l7gzb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:05.787415 master-0 kubenswrapper[17411]: I0223 13:07:05.787384 17411 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a82698b6-5a88-4fc7-915c-e56e32aafa81-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:05.990691 master-0 kubenswrapper[17411]: I0223 13:07:05.990554 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:05.991069 master-0 kubenswrapper[17411]: E0223 13:07:05.990906 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:07.99086531 +0000 UTC m=+21.418371947 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:06.269536 master-0 kubenswrapper[17411]: I0223 13:07:06.269440 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd" event={"ID":"a82698b6-5a88-4fc7-915c-e56e32aafa81","Type":"ContainerDied","Data":"37019f9a69363590fc785ac0e6fb4146fd34ab41bce22c42dfc2872a8ebef287"} Feb 23 13:07:06.269536 master-0 kubenswrapper[17411]: I0223 13:07:06.269518 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37019f9a69363590fc785ac0e6fb4146fd34ab41bce22c42dfc2872a8ebef287" Feb 23 13:07:06.269816 master-0 kubenswrapper[17411]: I0223 13:07:06.269604 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-9f5kd" Feb 23 13:07:06.702234 master-0 kubenswrapper[17411]: I0223 13:07:06.702158 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:06.703195 master-0 kubenswrapper[17411]: E0223 13:07:06.702386 17411 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 23 13:07:06.703195 master-0 kubenswrapper[17411]: E0223 13:07:06.702479 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls podName:43f01c26-9770-4b24-a91e-461c4b21ba31 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:08.702458619 +0000 UTC m=+22.129965216 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-lthpv" (UID: "43f01c26-9770-4b24-a91e-461c4b21ba31") : secret "prometheus-operator-tls" not found Feb 23 13:07:07.978155 master-0 kubenswrapper[17411]: I0223 13:07:07.978045 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d" Feb 23 13:07:07.979441 master-0 kubenswrapper[17411]: E0223 13:07:07.978429 17411 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 23 13:07:07.979441 master-0 kubenswrapper[17411]: E0223 13:07:07.978483 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert podName:ce5a6b36-46f6-42b7-8240-ca27d4e47e30 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:15.978464928 +0000 UTC m=+29.405971525 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert") pod "ingress-canary-rhj5d" (UID: "ce5a6b36-46f6-42b7-8240-ca27d4e47e30") : secret "canary-serving-cert" not found Feb 23 13:07:08.080673 master-0 kubenswrapper[17411]: I0223 13:07:08.080590 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:08.081064 master-0 kubenswrapper[17411]: E0223 13:07:08.080750 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:12.080724692 +0000 UTC m=+25.508231289 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:08.791510 master-0 kubenswrapper[17411]: I0223 13:07:08.791202 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:08.791510 master-0 kubenswrapper[17411]: E0223 13:07:08.791431 17411 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 23 13:07:08.791510 master-0 kubenswrapper[17411]: E0223 13:07:08.791528 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls podName:43f01c26-9770-4b24-a91e-461c4b21ba31 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:12.791507757 +0000 UTC m=+26.219014354 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-lthpv" (UID: "43f01c26-9770-4b24-a91e-461c4b21ba31") : secret "prometheus-operator-tls" not found Feb 23 13:07:08.948220 master-0 kubenswrapper[17411]: I0223 13:07:08.948144 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:07:08.948473 master-0 kubenswrapper[17411]: I0223 13:07:08.948363 17411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:07:08.967092 master-0 kubenswrapper[17411]: I0223 13:07:08.967037 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45ncb" Feb 23 13:07:12.146573 master-0 kubenswrapper[17411]: I0223 13:07:12.146489 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:12.147307 master-0 kubenswrapper[17411]: E0223 13:07:12.146786 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:20.14675479 +0000 UTC m=+33.574261437 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:12.179832 master-0 kubenswrapper[17411]: I0223 13:07:12.178752 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9gcb5"] Feb 23 13:07:12.179832 master-0 kubenswrapper[17411]: E0223 13:07:12.179092 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82698b6-5a88-4fc7-915c-e56e32aafa81" containerName="collect-profiles" Feb 23 13:07:12.179832 master-0 kubenswrapper[17411]: I0223 13:07:12.179109 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82698b6-5a88-4fc7-915c-e56e32aafa81" containerName="collect-profiles" Feb 23 13:07:12.179832 master-0 kubenswrapper[17411]: I0223 13:07:12.179237 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82698b6-5a88-4fc7-915c-e56e32aafa81" containerName="collect-profiles" Feb 23 13:07:12.179832 master-0 kubenswrapper[17411]: I0223 13:07:12.179698 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.183215 master-0 kubenswrapper[17411]: I0223 13:07:12.182966 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-qdkmb" Feb 23 13:07:12.183408 master-0 kubenswrapper[17411]: I0223 13:07:12.183360 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 13:07:12.249536 master-0 kubenswrapper[17411]: I0223 13:07:12.249469 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1225fc31-6ddf-46fc-93d0-27b209cf103e-host\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.249536 master-0 kubenswrapper[17411]: I0223 13:07:12.249539 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzpk\" (UniqueName: \"kubernetes.io/projected/1225fc31-6ddf-46fc-93d0-27b209cf103e-kube-api-access-kkzpk\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.249827 master-0 kubenswrapper[17411]: I0223 13:07:12.249763 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1225fc31-6ddf-46fc-93d0-27b209cf103e-serviceca\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.350710 master-0 kubenswrapper[17411]: I0223 13:07:12.350627 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1225fc31-6ddf-46fc-93d0-27b209cf103e-host\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " 
pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.350710 master-0 kubenswrapper[17411]: I0223 13:07:12.350696 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzpk\" (UniqueName: \"kubernetes.io/projected/1225fc31-6ddf-46fc-93d0-27b209cf103e-kube-api-access-kkzpk\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.350998 master-0 kubenswrapper[17411]: I0223 13:07:12.350812 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1225fc31-6ddf-46fc-93d0-27b209cf103e-host\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.351365 master-0 kubenswrapper[17411]: I0223 13:07:12.351055 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1225fc31-6ddf-46fc-93d0-27b209cf103e-serviceca\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.351898 master-0 kubenswrapper[17411]: I0223 13:07:12.351862 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1225fc31-6ddf-46fc-93d0-27b209cf103e-serviceca\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.370364 master-0 kubenswrapper[17411]: I0223 13:07:12.370156 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzpk\" (UniqueName: \"kubernetes.io/projected/1225fc31-6ddf-46fc-93d0-27b209cf103e-kube-api-access-kkzpk\") pod \"node-ca-9gcb5\" (UID: \"1225fc31-6ddf-46fc-93d0-27b209cf103e\") " pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.518987 master-0 
kubenswrapper[17411]: I0223 13:07:12.518885 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9gcb5" Feb 23 13:07:12.546134 master-0 kubenswrapper[17411]: W0223 13:07:12.546062 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1225fc31_6ddf_46fc_93d0_27b209cf103e.slice/crio-63c039b974816d2ca779714a818069dcbebf79c79ed197b6e3a38077cd66edac WatchSource:0}: Error finding container 63c039b974816d2ca779714a818069dcbebf79c79ed197b6e3a38077cd66edac: Status 404 returned error can't find the container with id 63c039b974816d2ca779714a818069dcbebf79c79ed197b6e3a38077cd66edac Feb 23 13:07:12.856209 master-0 kubenswrapper[17411]: I0223 13:07:12.856078 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:12.856424 master-0 kubenswrapper[17411]: E0223 13:07:12.856331 17411 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 23 13:07:12.856474 master-0 kubenswrapper[17411]: E0223 13:07:12.856438 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls podName:43f01c26-9770-4b24-a91e-461c4b21ba31 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:20.856413614 +0000 UTC m=+34.283920271 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-lthpv" (UID: "43f01c26-9770-4b24-a91e-461c4b21ba31") : secret "prometheus-operator-tls" not found Feb 23 13:07:13.322357 master-0 kubenswrapper[17411]: I0223 13:07:13.322298 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9gcb5" event={"ID":"1225fc31-6ddf-46fc-93d0-27b209cf103e","Type":"ContainerStarted","Data":"63c039b974816d2ca779714a818069dcbebf79c79ed197b6e3a38077cd66edac"} Feb 23 13:07:15.338908 master-0 kubenswrapper[17411]: I0223 13:07:15.338849 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9gcb5" event={"ID":"1225fc31-6ddf-46fc-93d0-27b209cf103e","Type":"ContainerStarted","Data":"a2e29cc8d0228f49f5f892d93e7bfb8a02eeea622b20a11e3f292a49d0b385da"} Feb 23 13:07:15.365636 master-0 kubenswrapper[17411]: I0223 13:07:15.365563 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9gcb5" podStartSLOduration=1.334437197 podStartE2EDuration="3.365544648s" podCreationTimestamp="2026-02-23 13:07:12 +0000 UTC" firstStartedPulling="2026-02-23 13:07:12.548426148 +0000 UTC m=+25.975932745" lastFinishedPulling="2026-02-23 13:07:14.579533599 +0000 UTC m=+28.007040196" observedRunningTime="2026-02-23 13:07:15.364756956 +0000 UTC m=+28.792263563" watchObservedRunningTime="2026-02-23 13:07:15.365544648 +0000 UTC m=+28.793051245" Feb 23 13:07:16.008744 master-0 kubenswrapper[17411]: I0223 13:07:16.008678 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d" Feb 23 13:07:16.008972 
master-0 kubenswrapper[17411]: E0223 13:07:16.008889 17411 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 23 13:07:16.009056 master-0 kubenswrapper[17411]: E0223 13:07:16.009023 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert podName:ce5a6b36-46f6-42b7-8240-ca27d4e47e30 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:32.008992101 +0000 UTC m=+45.436498738 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert") pod "ingress-canary-rhj5d" (UID: "ce5a6b36-46f6-42b7-8240-ca27d4e47e30") : secret "canary-serving-cert" not found Feb 23 13:07:20.164264 master-0 kubenswrapper[17411]: I0223 13:07:20.164168 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:20.165259 master-0 kubenswrapper[17411]: E0223 13:07:20.164492 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:36.164451799 +0000 UTC m=+49.591958436 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:20.875407 master-0 kubenswrapper[17411]: I0223 13:07:20.875310 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:20.878783 master-0 kubenswrapper[17411]: I0223 13:07:20.878754 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/43f01c26-9770-4b24-a91e-461c4b21ba31-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-lthpv\" (UID: \"43f01c26-9770-4b24-a91e-461c4b21ba31\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" Feb 23 13:07:21.030154 master-0 kubenswrapper[17411]: I0223 13:07:21.030064 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 23 13:07:21.031259 master-0 kubenswrapper[17411]: I0223 13:07:21.031186 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:07:21.034649 master-0 kubenswrapper[17411]: I0223 13:07:21.034602 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 13:07:21.034856 master-0 kubenswrapper[17411]: I0223 13:07:21.034812 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-t58wm" Feb 23 13:07:21.053096 master-0 kubenswrapper[17411]: I0223 13:07:21.053046 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 23 13:07:21.093281 master-0 kubenswrapper[17411]: I0223 13:07:21.086881 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kube-api-access\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:07:21.093281 master-0 kubenswrapper[17411]: I0223 13:07:21.087006 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:07:21.093281 master-0 kubenswrapper[17411]: I0223 13:07:21.087076 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-var-lock\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 23 13:07:21.153969 master-0 
kubenswrapper[17411]: I0223 13:07:21.153798 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv"
Feb 23 13:07:21.187989 master-0 kubenswrapper[17411]: I0223 13:07:21.187935 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kube-api-access\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:21.188558 master-0 kubenswrapper[17411]: I0223 13:07:21.188040 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:21.188558 master-0 kubenswrapper[17411]: I0223 13:07:21.188109 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-var-lock\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:21.188558 master-0 kubenswrapper[17411]: I0223 13:07:21.188172 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:21.188558 master-0 kubenswrapper[17411]: I0223 13:07:21.188192 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-var-lock\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:21.205409 master-0 kubenswrapper[17411]: I0223 13:07:21.203848 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kube-api-access\") pod \"installer-4-master-0\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:21.354355 master-0 kubenswrapper[17411]: I0223 13:07:21.354303 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:21.544945 master-0 kubenswrapper[17411]: I0223 13:07:21.544874 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-lthpv"]
Feb 23 13:07:21.553900 master-0 kubenswrapper[17411]: W0223 13:07:21.553659 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43f01c26_9770_4b24_a91e_461c4b21ba31.slice/crio-63cc3ee8310f52de82fa45cd990dab0faa48dc6dc793578f4130b8f7c5a3d8b5 WatchSource:0}: Error finding container 63cc3ee8310f52de82fa45cd990dab0faa48dc6dc793578f4130b8f7c5a3d8b5: Status 404 returned error can't find the container with id 63cc3ee8310f52de82fa45cd990dab0faa48dc6dc793578f4130b8f7c5a3d8b5
Feb 23 13:07:21.788796 master-0 kubenswrapper[17411]: I0223 13:07:21.788677 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 23 13:07:21.794944 master-0 kubenswrapper[17411]: W0223 13:07:21.794864 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod93c37e01_20fe_43f0_b014_2aaf7a3c2b8b.slice/crio-cf38011e894745f530b3ac62370eaf56db6498406855056a772c5e72657ae7ea WatchSource:0}: Error finding container cf38011e894745f530b3ac62370eaf56db6498406855056a772c5e72657ae7ea: Status 404 returned error can't find the container with id cf38011e894745f530b3ac62370eaf56db6498406855056a772c5e72657ae7ea
Feb 23 13:07:22.389280 master-0 kubenswrapper[17411]: I0223 13:07:22.389114 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" event={"ID":"43f01c26-9770-4b24-a91e-461c4b21ba31","Type":"ContainerStarted","Data":"63cc3ee8310f52de82fa45cd990dab0faa48dc6dc793578f4130b8f7c5a3d8b5"}
Feb 23 13:07:22.391173 master-0 kubenswrapper[17411]: I0223 13:07:22.391132 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b","Type":"ContainerStarted","Data":"9f10bceb7445336e1df66d48a02ebd47ea2dc043a12ac6b767935a8559b8145f"}
Feb 23 13:07:22.391173 master-0 kubenswrapper[17411]: I0223 13:07:22.391163 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b","Type":"ContainerStarted","Data":"cf38011e894745f530b3ac62370eaf56db6498406855056a772c5e72657ae7ea"}
Feb 23 13:07:22.410613 master-0 kubenswrapper[17411]: I0223 13:07:22.410549 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.410528494 podStartE2EDuration="1.410528494s" podCreationTimestamp="2026-02-23 13:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:22.410355559 +0000 UTC m=+35.837862166" watchObservedRunningTime="2026-02-23 13:07:22.410528494 +0000 UTC m=+35.838035111"
Feb 23 13:07:23.400895 master-0 kubenswrapper[17411]: I0223 13:07:23.400816 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" event={"ID":"43f01c26-9770-4b24-a91e-461c4b21ba31","Type":"ContainerStarted","Data":"5834e6af95c1cf12bf5b7ceae77a11e400fa7b818c2640f5e886dd55d4f5e475"}
Feb 23 13:07:24.413022 master-0 kubenswrapper[17411]: I0223 13:07:24.412934 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" event={"ID":"43f01c26-9770-4b24-a91e-461c4b21ba31","Type":"ContainerStarted","Data":"990cd9e83e12bf966114f3c0f4d96537fad2d16b811a2fb64d0b15f5143f09a4"}
Feb 23 13:07:24.440629 master-0 kubenswrapper[17411]: I0223 13:07:24.440528 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-754bc4d665-lthpv" podStartSLOduration=18.795110776 podStartE2EDuration="20.440499958s" podCreationTimestamp="2026-02-23 13:07:04 +0000 UTC" firstStartedPulling="2026-02-23 13:07:21.558496767 +0000 UTC m=+34.986003364" lastFinishedPulling="2026-02-23 13:07:23.203885909 +0000 UTC m=+36.631392546" observedRunningTime="2026-02-23 13:07:24.439348236 +0000 UTC m=+37.866854833" watchObservedRunningTime="2026-02-23 13:07:24.440499958 +0000 UTC m=+37.868006605"
Feb 23 13:07:26.341560 master-0 kubenswrapper[17411]: I0223 13:07:26.341495 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"]
Feb 23 13:07:26.343060 master-0 kubenswrapper[17411]: I0223 13:07:26.343022 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.346325 master-0 kubenswrapper[17411]: I0223 13:07:26.346283 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 23 13:07:26.346729 master-0 kubenswrapper[17411]: I0223 13:07:26.346698 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-z5ckf"
Feb 23 13:07:26.348002 master-0 kubenswrapper[17411]: I0223 13:07:26.347947 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 23 13:07:26.362149 master-0 kubenswrapper[17411]: I0223 13:07:26.362084 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-xpdtc"]
Feb 23 13:07:26.363753 master-0 kubenswrapper[17411]: I0223 13:07:26.363714 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.366568 master-0 kubenswrapper[17411]: I0223 13:07:26.366528 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 23 13:07:26.366925 master-0 kubenswrapper[17411]: I0223 13:07:26.366893 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 23 13:07:26.367169 master-0 kubenswrapper[17411]: I0223 13:07:26.367104 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/751e4191-f5e5-4e58-bc64-b3e23df18dec-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.367327 master-0 kubenswrapper[17411]: I0223 13:07:26.367293 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9whd7"
Feb 23 13:07:26.367374 master-0 kubenswrapper[17411]: I0223 13:07:26.367285 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klts6\" (UniqueName: \"kubernetes.io/projected/751e4191-f5e5-4e58-bc64-b3e23df18dec-kube-api-access-klts6\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.367414 master-0 kubenswrapper[17411]: I0223 13:07:26.367378 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/751e4191-f5e5-4e58-bc64-b3e23df18dec-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.367463 master-0 kubenswrapper[17411]: I0223 13:07:26.367432 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/751e4191-f5e5-4e58-bc64-b3e23df18dec-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.391478 master-0 kubenswrapper[17411]: I0223 13:07:26.390032 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"]
Feb 23 13:07:26.401333 master-0 kubenswrapper[17411]: I0223 13:07:26.400486 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-lf487"]
Feb 23 13:07:26.402274 master-0 kubenswrapper[17411]: I0223 13:07:26.402219 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.408197 master-0 kubenswrapper[17411]: I0223 13:07:26.408147 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-pzqs9"
Feb 23 13:07:26.408470 master-0 kubenswrapper[17411]: I0223 13:07:26.408287 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 23 13:07:26.408674 master-0 kubenswrapper[17411]: I0223 13:07:26.408599 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 23 13:07:26.429713 master-0 kubenswrapper[17411]: I0223 13:07:26.429194 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 23 13:07:26.442102 master-0 kubenswrapper[17411]: I0223 13:07:26.442039 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-lf487"]
Feb 23 13:07:26.468336 master-0 kubenswrapper[17411]: I0223 13:07:26.468287 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klts6\" (UniqueName: \"kubernetes.io/projected/751e4191-f5e5-4e58-bc64-b3e23df18dec-kube-api-access-klts6\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.468336 master-0 kubenswrapper[17411]: I0223 13:07:26.468338 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.468336 master-0 kubenswrapper[17411]: I0223 13:07:26.468364 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/751e4191-f5e5-4e58-bc64-b3e23df18dec-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.468336 master-0 kubenswrapper[17411]: I0223 13:07:26.468385 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.468336 master-0 kubenswrapper[17411]: I0223 13:07:26.468413 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-sys\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468439 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/751e4191-f5e5-4e58-bc64-b3e23df18dec-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468464 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-metrics-client-ca\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468486 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-root\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468506 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468531 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qkjd\" (UniqueName: \"kubernetes.io/projected/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-api-access-8qkjd\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468554 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/751e4191-f5e5-4e58-bc64-b3e23df18dec-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468575 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jjf\" (UniqueName: \"kubernetes.io/projected/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-kube-api-access-z9jjf\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468593 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468609 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-tls\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468628 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468648 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468665 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-wtmp\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.468822 master-0 kubenswrapper[17411]: I0223 13:07:26.468698 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-textfile\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.469764 master-0 kubenswrapper[17411]: I0223 13:07:26.469741 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/751e4191-f5e5-4e58-bc64-b3e23df18dec-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.475317 master-0 kubenswrapper[17411]: I0223 13:07:26.475287 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/751e4191-f5e5-4e58-bc64-b3e23df18dec-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.476122 master-0 kubenswrapper[17411]: I0223 13:07:26.476075 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/751e4191-f5e5-4e58-bc64-b3e23df18dec-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.490487 master-0 kubenswrapper[17411]: I0223 13:07:26.490444 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klts6\" (UniqueName: \"kubernetes.io/projected/751e4191-f5e5-4e58-bc64-b3e23df18dec-kube-api-access-klts6\") pod \"openshift-state-metrics-6dbff8cb4c-b8hkm\" (UID: \"751e4191-f5e5-4e58-bc64-b3e23df18dec\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.570232 master-0 kubenswrapper[17411]: I0223 13:07:26.570160 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-textfile\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.570484 master-0 kubenswrapper[17411]: I0223 13:07:26.570271 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.570484 master-0 kubenswrapper[17411]: E0223 13:07:26.570406 17411 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Feb 23 13:07:26.570484 master-0 kubenswrapper[17411]: E0223 13:07:26.570469 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-tls podName:cc9c39a8-1b8e-4c3c-9379-61d40d53104f nodeName:}" failed. No retries permitted until 2026-02-23 13:07:27.070447399 +0000 UTC m=+40.497953996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-tls") pod "kube-state-metrics-59584d565f-lf487" (UID: "cc9c39a8-1b8e-4c3c-9379-61d40d53104f") : secret "kube-state-metrics-tls" not found
Feb 23 13:07:26.570776 master-0 kubenswrapper[17411]: I0223 13:07:26.570729 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.570939 master-0 kubenswrapper[17411]: I0223 13:07:26.570920 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-sys\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.571106 master-0 kubenswrapper[17411]: I0223 13:07:26.571088 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-metrics-client-ca\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.571232 master-0 kubenswrapper[17411]: I0223 13:07:26.571214 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-root\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.571377 master-0 kubenswrapper[17411]: I0223 13:07:26.571342 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-root\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.571377 master-0 kubenswrapper[17411]: I0223 13:07:26.570969 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-sys\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.571377 master-0 kubenswrapper[17411]: I0223 13:07:26.571350 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.571512 master-0 kubenswrapper[17411]: I0223 13:07:26.570810 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-textfile\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.572124 master-0 kubenswrapper[17411]: I0223 13:07:26.572078 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-metrics-client-ca\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.572219 master-0 kubenswrapper[17411]: I0223 13:07:26.572192 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qkjd\" (UniqueName: \"kubernetes.io/projected/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-api-access-8qkjd\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.572302 master-0 kubenswrapper[17411]: I0223 13:07:26.572280 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9jjf\" (UniqueName: \"kubernetes.io/projected/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-kube-api-access-z9jjf\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.572353 master-0 kubenswrapper[17411]: I0223 13:07:26.572313 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-tls\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.572353 master-0 kubenswrapper[17411]: I0223 13:07:26.572335 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.572422 master-0 kubenswrapper[17411]: I0223 13:07:26.572385 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.572422 master-0 kubenswrapper[17411]: I0223 13:07:26.572411 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.572498 master-0 kubenswrapper[17411]: I0223 13:07:26.572437 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-wtmp\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.572736 master-0 kubenswrapper[17411]: I0223 13:07:26.572709 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-wtmp\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.573074 master-0 kubenswrapper[17411]: I0223 13:07:26.573054 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.573624 master-0 kubenswrapper[17411]: I0223 13:07:26.573588 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.573696 master-0 kubenswrapper[17411]: I0223 13:07:26.573673 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.574076 master-0 kubenswrapper[17411]: I0223 13:07:26.574055 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.576979 master-0 kubenswrapper[17411]: I0223 13:07:26.576942 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.577206 master-0 kubenswrapper[17411]: I0223 13:07:26.577077 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-node-exporter-tls\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.594822 master-0 kubenswrapper[17411]: I0223 13:07:26.594735 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9jjf\" (UniqueName: \"kubernetes.io/projected/8f33650b-a63a-4ddd-9b9c-21a45d59e4ed-kube-api-access-z9jjf\") pod \"node-exporter-xpdtc\" (UID: \"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed\") " pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:26.597221 master-0 kubenswrapper[17411]: I0223 13:07:26.597165 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qkjd\" (UniqueName: \"kubernetes.io/projected/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-api-access-8qkjd\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:26.665462 master-0 kubenswrapper[17411]: I0223 13:07:26.665394 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"
Feb 23 13:07:26.699384 master-0 kubenswrapper[17411]: I0223 13:07:26.697836 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-xpdtc"
Feb 23 13:07:27.080264 master-0 kubenswrapper[17411]: I0223 13:07:27.080183 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:27.085320 master-0 kubenswrapper[17411]: I0223 13:07:27.085281 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc9c39a8-1b8e-4c3c-9379-61d40d53104f-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-lf487\" (UID: \"cc9c39a8-1b8e-4c3c-9379-61d40d53104f\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:27.100527 master-0 kubenswrapper[17411]: I0223 13:07:27.100478 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm"]
Feb 23 13:07:27.106198 master-0 kubenswrapper[17411]: W0223 13:07:27.106130 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod751e4191_f5e5_4e58_bc64_b3e23df18dec.slice/crio-c28a8a45e10c380a701de367c2f5638a832893fe2a9861a716f21cafb167e425 WatchSource:0}: Error finding container c28a8a45e10c380a701de367c2f5638a832893fe2a9861a716f21cafb167e425: Status 404 returned error can't find the container with id c28a8a45e10c380a701de367c2f5638a832893fe2a9861a716f21cafb167e425
Feb 23 13:07:27.361989 master-0 kubenswrapper[17411]: I0223 13:07:27.361951 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487"
Feb 23 13:07:27.369342 master-0 kubenswrapper[17411]: I0223 13:07:27.369305 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 23 13:07:27.377192 master-0 kubenswrapper[17411]: I0223 13:07:27.372955 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.377757 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.377804 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.377960 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.377995 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.377965 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.378153 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.378184 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.378237 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wnzv6"
Feb 23 13:07:27.380302 master-0 kubenswrapper[17411]: I0223 13:07:27.379164 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.387135 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-volume\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.387180 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.387207 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.387445 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23
13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.387584 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-web-config\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.387938 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.388012 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.388064 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584sx\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-kube-api-access-584sx\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.388089 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.388191 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.388226 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.389670 master-0 kubenswrapper[17411]: I0223 13:07:27.388318 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-out\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.400529 master-0 kubenswrapper[17411]: I0223 13:07:27.400477 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 23 13:07:27.442701 master-0 kubenswrapper[17411]: I0223 13:07:27.442370 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-xpdtc" 
event={"ID":"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed","Type":"ContainerStarted","Data":"a179ec548814069baca1411b7a2ad1a7d0b71aae34c0bffdc7f63523ae256ca8"} Feb 23 13:07:27.456479 master-0 kubenswrapper[17411]: I0223 13:07:27.456390 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm" event={"ID":"751e4191-f5e5-4e58-bc64-b3e23df18dec","Type":"ContainerStarted","Data":"36787183a5cc28dccdfbdd116dc6a82771d1d91a217d15db178c42bd9769a6dd"} Feb 23 13:07:27.456479 master-0 kubenswrapper[17411]: I0223 13:07:27.456482 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm" event={"ID":"751e4191-f5e5-4e58-bc64-b3e23df18dec","Type":"ContainerStarted","Data":"10ec4530b569fc0c497f5447fa1446c7d34ce88ec3b331845c87c1078f89465d"} Feb 23 13:07:27.456479 master-0 kubenswrapper[17411]: I0223 13:07:27.456499 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm" event={"ID":"751e4191-f5e5-4e58-bc64-b3e23df18dec","Type":"ContainerStarted","Data":"c28a8a45e10c380a701de367c2f5638a832893fe2a9861a716f21cafb167e425"} Feb 23 13:07:27.488931 master-0 kubenswrapper[17411]: I0223 13:07:27.488888 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584sx\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-kube-api-access-584sx\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489047 master-0 kubenswrapper[17411]: I0223 13:07:27.488938 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489047 master-0 kubenswrapper[17411]: I0223 13:07:27.488984 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489047 master-0 kubenswrapper[17411]: I0223 13:07:27.489006 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489047 master-0 kubenswrapper[17411]: I0223 13:07:27.489026 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-out\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489047 master-0 kubenswrapper[17411]: I0223 13:07:27.489049 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-volume\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489217 master-0 kubenswrapper[17411]: I0223 13:07:27.489069 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod 
\"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489217 master-0 kubenswrapper[17411]: I0223 13:07:27.489091 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489217 master-0 kubenswrapper[17411]: I0223 13:07:27.489112 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489217 master-0 kubenswrapper[17411]: I0223 13:07:27.489134 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-web-config\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489217 master-0 kubenswrapper[17411]: I0223 13:07:27.489186 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.489217 master-0 kubenswrapper[17411]: I0223 13:07:27.489213 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.493015 master-0 kubenswrapper[17411]: I0223 13:07:27.492976 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.494142 master-0 kubenswrapper[17411]: I0223 13:07:27.494099 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.494500 master-0 kubenswrapper[17411]: I0223 13:07:27.494471 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.495295 master-0 kubenswrapper[17411]: E0223 13:07:27.495233 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:27.995204178 +0000 UTC m=+41.422710785 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:27.496429 master-0 kubenswrapper[17411]: I0223 13:07:27.496031 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.496429 master-0 kubenswrapper[17411]: I0223 13:07:27.496225 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-web-config\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.498005 master-0 kubenswrapper[17411]: I0223 13:07:27.497974 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-out\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.498106 master-0 kubenswrapper[17411]: I0223 13:07:27.498077 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.498348 master-0 kubenswrapper[17411]: I0223 13:07:27.498319 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.498348 master-0 kubenswrapper[17411]: I0223 13:07:27.498338 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.499193 master-0 kubenswrapper[17411]: I0223 13:07:27.499150 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-volume\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.516485 master-0 kubenswrapper[17411]: I0223 13:07:27.516428 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584sx\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-kube-api-access-584sx\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:27.875907 master-0 kubenswrapper[17411]: I0223 13:07:27.875854 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-lf487"] Feb 23 13:07:27.888253 master-0 kubenswrapper[17411]: W0223 13:07:27.888178 17411 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc9c39a8_1b8e_4c3c_9379_61d40d53104f.slice/crio-54f43bc96dbf621e0ff2accb3c3f1f4a6ea0dbc2745de50a59672785e184b06c WatchSource:0}: Error finding container 54f43bc96dbf621e0ff2accb3c3f1f4a6ea0dbc2745de50a59672785e184b06c: Status 404 returned error can't find the container with id 54f43bc96dbf621e0ff2accb3c3f1f4a6ea0dbc2745de50a59672785e184b06c Feb 23 13:07:28.003155 master-0 kubenswrapper[17411]: I0223 13:07:28.003055 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:28.003471 master-0 kubenswrapper[17411]: E0223 13:07:28.003355 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:29.003321182 +0000 UTC m=+42.430827849 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:28.465315 master-0 kubenswrapper[17411]: I0223 13:07:28.465188 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487" event={"ID":"cc9c39a8-1b8e-4c3c-9379-61d40d53104f","Type":"ContainerStarted","Data":"54f43bc96dbf621e0ff2accb3c3f1f4a6ea0dbc2745de50a59672785e184b06c"} Feb 23 13:07:28.469159 master-0 kubenswrapper[17411]: I0223 13:07:28.469113 17411 generic.go:334] "Generic (PLEG): container finished" podID="8f33650b-a63a-4ddd-9b9c-21a45d59e4ed" containerID="48eb79e4e75fefff25dec6ca5d1358e4307ca8631e4392071285d61eb48584dc" exitCode=0 Feb 23 13:07:28.469232 master-0 kubenswrapper[17411]: I0223 13:07:28.469175 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-xpdtc" event={"ID":"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed","Type":"ContainerDied","Data":"48eb79e4e75fefff25dec6ca5d1358e4307ca8631e4392071285d61eb48584dc"} Feb 23 13:07:29.031534 master-0 kubenswrapper[17411]: I0223 13:07:29.031426 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:29.031750 master-0 kubenswrapper[17411]: E0223 13:07:29.031713 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:07:31.03169001 +0000 UTC m=+44.459196607 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:29.370053 master-0 kubenswrapper[17411]: I0223 13:07:29.369778 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"] Feb 23 13:07:29.372448 master-0 kubenswrapper[17411]: I0223 13:07:29.372139 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" Feb 23 13:07:29.374819 master-0 kubenswrapper[17411]: I0223 13:07:29.374773 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 23 13:07:29.374919 master-0 kubenswrapper[17411]: I0223 13:07:29.374866 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8odpr3ab0635p" Feb 23 13:07:29.374985 master-0 kubenswrapper[17411]: I0223 13:07:29.374952 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-qhbh8" Feb 23 13:07:29.375069 master-0 kubenswrapper[17411]: I0223 13:07:29.375033 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 23 13:07:29.375162 master-0 kubenswrapper[17411]: I0223 13:07:29.375049 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 23 13:07:29.375224 master-0 kubenswrapper[17411]: I0223 13:07:29.375182 17411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"thanos-querier-tls" Feb 23 13:07:29.377229 master-0 kubenswrapper[17411]: I0223 13:07:29.377049 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 23 13:07:29.401795 master-0 kubenswrapper[17411]: I0223 13:07:29.401731 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"] Feb 23 13:07:29.479607 master-0 kubenswrapper[17411]: I0223 13:07:29.479529 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm" event={"ID":"751e4191-f5e5-4e58-bc64-b3e23df18dec","Type":"ContainerStarted","Data":"f0ef204c71e17d79a77675b875d4738070bb5ae6963ca0e977c02fc3512819b7"} Feb 23 13:07:29.482334 master-0 kubenswrapper[17411]: I0223 13:07:29.482299 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-xpdtc" event={"ID":"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed","Type":"ContainerStarted","Data":"98ee3c4e29e48c209f5b8db2df17a3acda87e35920381fc119e0b6d81137384e"} Feb 23 13:07:29.482528 master-0 kubenswrapper[17411]: I0223 13:07:29.482512 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-xpdtc" event={"ID":"8f33650b-a63a-4ddd-9b9c-21a45d59e4ed","Type":"ContainerStarted","Data":"6ab01d30ac28739c954f9bfbc8b545d0335caf6ee53846778b3d3a363b3e62ea"} Feb 23 13:07:29.507348 master-0 kubenswrapper[17411]: I0223 13:07:29.506746 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-b8hkm" podStartSLOduration=2.292611355 podStartE2EDuration="3.501201065s" podCreationTimestamp="2026-02-23 13:07:26 +0000 UTC" firstStartedPulling="2026-02-23 13:07:27.417870948 +0000 UTC m=+40.845377545" lastFinishedPulling="2026-02-23 13:07:28.626460638 +0000 UTC m=+42.053967255" observedRunningTime="2026-02-23 
13:07:29.497563403 +0000 UTC m=+42.925070010" watchObservedRunningTime="2026-02-23 13:07:29.501201065 +0000 UTC m=+42.928707662" Feb 23 13:07:29.518700 master-0 kubenswrapper[17411]: I0223 13:07:29.518606 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-xpdtc" podStartSLOduration=2.439704371 podStartE2EDuration="3.518584385s" podCreationTimestamp="2026-02-23 13:07:26 +0000 UTC" firstStartedPulling="2026-02-23 13:07:26.734552685 +0000 UTC m=+40.162059272" lastFinishedPulling="2026-02-23 13:07:27.813432689 +0000 UTC m=+41.240939286" observedRunningTime="2026-02-23 13:07:29.517117804 +0000 UTC m=+42.944624421" watchObservedRunningTime="2026-02-23 13:07:29.518584385 +0000 UTC m=+42.946090982" Feb 23 13:07:29.541916 master-0 kubenswrapper[17411]: I0223 13:07:29.541161 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-grpc-tls\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" Feb 23 13:07:29.541916 master-0 kubenswrapper[17411]: I0223 13:07:29.541316 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a75814ac-2491-4c2b-9c62-de6cd5023f5b-metrics-client-ca\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" Feb 23 13:07:29.541916 master-0 kubenswrapper[17411]: I0223 13:07:29.541611 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-tls\") pod \"thanos-querier-5b5fbd9b56-86rpd\" 
(UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.541916 master-0 kubenswrapper[17411]: I0223 13:07:29.541719 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjj7x\" (UniqueName: \"kubernetes.io/projected/a75814ac-2491-4c2b-9c62-de6cd5023f5b-kube-api-access-xjj7x\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.543265 master-0 kubenswrapper[17411]: I0223 13:07:29.543195 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.543326 master-0 kubenswrapper[17411]: I0223 13:07:29.543277 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.543362 master-0 kubenswrapper[17411]: I0223 13:07:29.543350 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.543569 master-0 kubenswrapper[17411]: I0223 13:07:29.543542 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.645551 master-0 kubenswrapper[17411]: I0223 13:07:29.645425 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-grpc-tls\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.646164 master-0 kubenswrapper[17411]: I0223 13:07:29.646059 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a75814ac-2491-4c2b-9c62-de6cd5023f5b-metrics-client-ca\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.646358 master-0 kubenswrapper[17411]: I0223 13:07:29.646316 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-tls\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.646514 master-0 kubenswrapper[17411]: I0223 13:07:29.646475 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjj7x\" (UniqueName: \"kubernetes.io/projected/a75814ac-2491-4c2b-9c62-de6cd5023f5b-kube-api-access-xjj7x\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.646742 master-0 kubenswrapper[17411]: I0223 13:07:29.646705 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.646782 master-0 kubenswrapper[17411]: I0223 13:07:29.646754 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.647052 master-0 kubenswrapper[17411]: I0223 13:07:29.647005 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.647103 master-0 kubenswrapper[17411]: I0223 13:07:29.647061 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.647187 master-0 kubenswrapper[17411]: I0223 13:07:29.647128 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a75814ac-2491-4c2b-9c62-de6cd5023f5b-metrics-client-ca\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.649187 master-0 kubenswrapper[17411]: I0223 13:07:29.649150 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-tls\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.649370 master-0 kubenswrapper[17411]: I0223 13:07:29.649338 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-grpc-tls\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.650104 master-0 kubenswrapper[17411]: I0223 13:07:29.650053 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.652612 master-0 kubenswrapper[17411]: I0223 13:07:29.652577 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.654129 master-0 kubenswrapper[17411]: I0223 13:07:29.653934 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.655323 master-0 kubenswrapper[17411]: I0223 13:07:29.655273 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a75814ac-2491-4c2b-9c62-de6cd5023f5b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.666960 master-0 kubenswrapper[17411]: I0223 13:07:29.666791 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjj7x\" (UniqueName: \"kubernetes.io/projected/a75814ac-2491-4c2b-9c62-de6cd5023f5b-kube-api-access-xjj7x\") pod \"thanos-querier-5b5fbd9b56-86rpd\" (UID: \"a75814ac-2491-4c2b-9c62-de6cd5023f5b\") " pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:29.707984 master-0 kubenswrapper[17411]: I0223 13:07:29.707773 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"
Feb 23 13:07:30.149387 master-0 kubenswrapper[17411]: I0223 13:07:30.149265 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd"]
Feb 23 13:07:30.154884 master-0 kubenswrapper[17411]: W0223 13:07:30.154786 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda75814ac_2491_4c2b_9c62_de6cd5023f5b.slice/crio-8dbd862e529b3eeb60abd63d12bee5668023828e7d2bde70ab1f2ebe06333635 WatchSource:0}: Error finding container 8dbd862e529b3eeb60abd63d12bee5668023828e7d2bde70ab1f2ebe06333635: Status 404 returned error can't find the container with id 8dbd862e529b3eeb60abd63d12bee5668023828e7d2bde70ab1f2ebe06333635
Feb 23 13:07:30.492090 master-0 kubenswrapper[17411]: I0223 13:07:30.492008 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" event={"ID":"a75814ac-2491-4c2b-9c62-de6cd5023f5b","Type":"ContainerStarted","Data":"8dbd862e529b3eeb60abd63d12bee5668023828e7d2bde70ab1f2ebe06333635"}
Feb 23 13:07:30.495184 master-0 kubenswrapper[17411]: I0223 13:07:30.495082 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487" event={"ID":"cc9c39a8-1b8e-4c3c-9379-61d40d53104f","Type":"ContainerStarted","Data":"7f87de4ff4937ac020da71cb0b2cb5dc420a7a2194f4c2c8671a8c9a4953e104"}
Feb 23 13:07:30.495184 master-0 kubenswrapper[17411]: I0223 13:07:30.495153 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487" event={"ID":"cc9c39a8-1b8e-4c3c-9379-61d40d53104f","Type":"ContainerStarted","Data":"6d85ec4edb8760ac7c193ae9e85646a9e0bc5df5ab61fef619d23a899b60f619"}
Feb 23 13:07:30.495184 master-0 kubenswrapper[17411]: I0223 13:07:30.495190 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487" event={"ID":"cc9c39a8-1b8e-4c3c-9379-61d40d53104f","Type":"ContainerStarted","Data":"9ea632952675dc6bb2ab8078c756f2c8283cad6ac2d8305c39de9ed6fb375be9"}
Feb 23 13:07:30.520712 master-0 kubenswrapper[17411]: I0223 13:07:30.520533 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-59584d565f-lf487" podStartSLOduration=2.650802494 podStartE2EDuration="4.520495289s" podCreationTimestamp="2026-02-23 13:07:26 +0000 UTC" firstStartedPulling="2026-02-23 13:07:27.891453348 +0000 UTC m=+41.318959945" lastFinishedPulling="2026-02-23 13:07:29.761146143 +0000 UTC m=+43.188652740" observedRunningTime="2026-02-23 13:07:30.517843124 +0000 UTC m=+43.945349731" watchObservedRunningTime="2026-02-23 13:07:30.520495289 +0000 UTC m=+43.948001926"
Feb 23 13:07:31.079319 master-0 kubenswrapper[17411]: I0223 13:07:31.078327 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:07:31.079319 master-0 kubenswrapper[17411]: E0223 13:07:31.078616 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:35.078581491 +0000 UTC m=+48.506088098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt
Feb 23 13:07:31.788811 master-0 kubenswrapper[17411]: I0223 13:07:31.788747 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-755ccb876-g7rtk"]
Feb 23 13:07:31.789819 master-0 kubenswrapper[17411]: I0223 13:07:31.789789 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:31.795856 master-0 kubenswrapper[17411]: I0223 13:07:31.794815 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-twm6g"
Feb 23 13:07:31.795856 master-0 kubenswrapper[17411]: I0223 13:07:31.795049 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 23 13:07:31.795856 master-0 kubenswrapper[17411]: I0223 13:07:31.795173 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 23 13:07:31.795856 master-0 kubenswrapper[17411]: I0223 13:07:31.795432 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-9jkd0a8djrqaf"
Feb 23 13:07:31.795856 master-0 kubenswrapper[17411]: I0223 13:07:31.795617 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 23 13:07:31.795856 master-0 kubenswrapper[17411]: I0223 13:07:31.795751 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 23 13:07:31.845474 master-0 kubenswrapper[17411]: I0223 13:07:31.807306 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-755ccb876-g7rtk"]
Feb 23 13:07:31.900503 master-0 kubenswrapper[17411]: I0223 13:07:31.900401 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-secret-metrics-server-tls\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:31.900760 master-0 kubenswrapper[17411]: I0223 13:07:31.900512 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58867c81-d4c7-4740-84c5-cb399cf415a1-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:31.900896 master-0 kubenswrapper[17411]: I0223 13:07:31.900867 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/58867c81-d4c7-4740-84c5-cb399cf415a1-metrics-server-audit-profiles\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:31.901373 master-0 kubenswrapper[17411]: I0223 13:07:31.901300 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-client-ca-bundle\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:31.902390 master-0 kubenswrapper[17411]: I0223 13:07:31.901396 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8fk4\" (UniqueName: \"kubernetes.io/projected/58867c81-d4c7-4740-84c5-cb399cf415a1-kube-api-access-d8fk4\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:31.902623 master-0 kubenswrapper[17411]: I0223 13:07:31.902529 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/58867c81-d4c7-4740-84c5-cb399cf415a1-audit-log\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:31.902623 master-0 kubenswrapper[17411]: I0223 13:07:31.902584 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-secret-metrics-client-certs\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.004344 master-0 kubenswrapper[17411]: I0223 13:07:32.004265 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/58867c81-d4c7-4740-84c5-cb399cf415a1-metrics-server-audit-profiles\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.004344 master-0 kubenswrapper[17411]: I0223 13:07:32.004330 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-client-ca-bundle\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.004344 master-0 kubenswrapper[17411]: I0223 13:07:32.004359 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8fk4\" (UniqueName: \"kubernetes.io/projected/58867c81-d4c7-4740-84c5-cb399cf415a1-kube-api-access-d8fk4\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.004659 master-0 kubenswrapper[17411]: I0223 13:07:32.004384 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/58867c81-d4c7-4740-84c5-cb399cf415a1-audit-log\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.004659 master-0 kubenswrapper[17411]: I0223 13:07:32.004404 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-secret-metrics-client-certs\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.004659 master-0 kubenswrapper[17411]: I0223 13:07:32.004425 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-secret-metrics-server-tls\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.004659 master-0 kubenswrapper[17411]: I0223 13:07:32.004449 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58867c81-d4c7-4740-84c5-cb399cf415a1-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.005399 master-0 kubenswrapper[17411]: I0223 13:07:32.005368 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58867c81-d4c7-4740-84c5-cb399cf415a1-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.005957 master-0 kubenswrapper[17411]: I0223 13:07:32.005919 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/58867c81-d4c7-4740-84c5-cb399cf415a1-audit-log\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.006735 master-0 kubenswrapper[17411]: I0223 13:07:32.006702 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/58867c81-d4c7-4740-84c5-cb399cf415a1-metrics-server-audit-profiles\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.008981 master-0 kubenswrapper[17411]: I0223 13:07:32.008940 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-client-ca-bundle\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.009215 master-0 kubenswrapper[17411]: I0223 13:07:32.009182 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-secret-metrics-client-certs\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.010571 master-0 kubenswrapper[17411]: I0223 13:07:32.010532 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/58867c81-d4c7-4740-84c5-cb399cf415a1-secret-metrics-server-tls\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.024800 master-0 kubenswrapper[17411]: I0223 13:07:32.024750 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8fk4\" (UniqueName: \"kubernetes.io/projected/58867c81-d4c7-4740-84c5-cb399cf415a1-kube-api-access-d8fk4\") pod \"metrics-server-755ccb876-g7rtk\" (UID: \"58867c81-d4c7-4740-84c5-cb399cf415a1\") " pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.085039 master-0 kubenswrapper[17411]: I0223 13:07:32.084817 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-688cc79566-cxl6b"]
Feb 23 13:07:32.085920 master-0 kubenswrapper[17411]: I0223 13:07:32.085883 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b"
Feb 23 13:07:32.088065 master-0 kubenswrapper[17411]: I0223 13:07:32.087940 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-zj94f"
Feb 23 13:07:32.088838 master-0 kubenswrapper[17411]: I0223 13:07:32.088165 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 23 13:07:32.105712 master-0 kubenswrapper[17411]: I0223 13:07:32.105657 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:32.106548 master-0 kubenswrapper[17411]: I0223 13:07:32.106486 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-688cc79566-cxl6b"]
Feb 23 13:07:32.111149 master-0 kubenswrapper[17411]: I0223 13:07:32.111110 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce5a6b36-46f6-42b7-8240-ca27d4e47e30-cert\") pod \"ingress-canary-rhj5d\" (UID: \"ce5a6b36-46f6-42b7-8240-ca27d4e47e30\") " pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:32.162969 master-0 kubenswrapper[17411]: I0223 13:07:32.162787 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:07:32.190849 master-0 kubenswrapper[17411]: I0223 13:07:32.190778 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rhj5d"
Feb 23 13:07:32.207238 master-0 kubenswrapper[17411]: I0223 13:07:32.207180 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d3aad4da-e494-4c10-9385-68f1c22a5a5a-monitoring-plugin-cert\") pod \"monitoring-plugin-688cc79566-cxl6b\" (UID: \"d3aad4da-e494-4c10-9385-68f1c22a5a5a\") " pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b"
Feb 23 13:07:32.309323 master-0 kubenswrapper[17411]: I0223 13:07:32.309238 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d3aad4da-e494-4c10-9385-68f1c22a5a5a-monitoring-plugin-cert\") pod \"monitoring-plugin-688cc79566-cxl6b\" (UID: \"d3aad4da-e494-4c10-9385-68f1c22a5a5a\") " pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b"
Feb 23 13:07:32.314415 master-0 kubenswrapper[17411]: I0223 13:07:32.314352 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d3aad4da-e494-4c10-9385-68f1c22a5a5a-monitoring-plugin-cert\") pod \"monitoring-plugin-688cc79566-cxl6b\" (UID: \"d3aad4da-e494-4c10-9385-68f1c22a5a5a\") " pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b"
Feb 23 13:07:32.433266 master-0 kubenswrapper[17411]: I0223 13:07:32.433082 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b"
Feb 23 13:07:32.737109 master-0 kubenswrapper[17411]: I0223 13:07:32.737040 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 23 13:07:32.745834 master-0 kubenswrapper[17411]: I0223 13:07:32.745747 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.752072 master-0 kubenswrapper[17411]: I0223 13:07:32.752021 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 23 13:07:32.752281 master-0 kubenswrapper[17411]: I0223 13:07:32.752119 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 23 13:07:32.752281 master-0 kubenswrapper[17411]: I0223 13:07:32.752256 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7q6an9sqsfn51"
Feb 23 13:07:32.752388 master-0 kubenswrapper[17411]: I0223 13:07:32.752275 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 23 13:07:32.752388 master-0 kubenswrapper[17411]: I0223 13:07:32.752365 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-54m2k"
Feb 23 13:07:32.752493 master-0 kubenswrapper[17411]: I0223 13:07:32.752284 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 23 13:07:32.752493 master-0 kubenswrapper[17411]: I0223 13:07:32.752407 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 23 13:07:32.752578 master-0 kubenswrapper[17411]: I0223 13:07:32.752512 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 23 13:07:32.752636 master-0 kubenswrapper[17411]: I0223 13:07:32.752608 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 23 13:07:32.752745 master-0 kubenswrapper[17411]: I0223 13:07:32.752720 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 23 13:07:32.753831 master-0 kubenswrapper[17411]: I0223 13:07:32.752829 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 23 13:07:32.756858 master-0 kubenswrapper[17411]: I0223 13:07:32.754073 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 23 13:07:32.757092 master-0 kubenswrapper[17411]: I0223 13:07:32.757030 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 23 13:07:32.776417 master-0 kubenswrapper[17411]: I0223 13:07:32.776365 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 23 13:07:32.824750 master-0 kubenswrapper[17411]: I0223 13:07:32.824647 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.824750 master-0 kubenswrapper[17411]: I0223 13:07:32.824754 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825415 master-0 kubenswrapper[17411]: I0223 13:07:32.824901 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825415 master-0 kubenswrapper[17411]: I0223 13:07:32.825014 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825415 master-0 kubenswrapper[17411]: I0223 13:07:32.825052 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825415 master-0 kubenswrapper[17411]: I0223 13:07:32.825164 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825415 master-0 kubenswrapper[17411]: I0223 13:07:32.825236 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825415 master-0 kubenswrapper[17411]: I0223 13:07:32.825332 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825645 master-0 kubenswrapper[17411]: I0223 13:07:32.825467 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825645 master-0 kubenswrapper[17411]: I0223 13:07:32.825507 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825645 master-0 kubenswrapper[17411]: I0223 13:07:32.825575 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825758 master-0 kubenswrapper[17411]: I0223 13:07:32.825604 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config-out\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825758 master-0 kubenswrapper[17411]: I0223 13:07:32.825685 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825758 master-0 kubenswrapper[17411]: I0223 13:07:32.825748 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825915 master-0 kubenswrapper[17411]: I0223 13:07:32.825872 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.825964 master-0 kubenswrapper[17411]: I0223 13:07:32.825916 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wmmh\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-kube-api-access-7wmmh\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.826028 master-0 kubenswrapper[17411]: I0223 13:07:32.825999 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.826075 master-0 kubenswrapper[17411]: I0223 13:07:32.826053 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-web-config\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.929030 master-0 kubenswrapper[17411]: I0223 13:07:32.928945 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.929278 master-0 kubenswrapper[17411]: I0223 13:07:32.929054 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.929278 master-0 kubenswrapper[17411]: I0223 13:07:32.929093 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:07:32.929278 master-0 kubenswrapper[17411]: I0223 13:07:32.929129 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\"
(UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config-out\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929278 master-0 kubenswrapper[17411]: I0223 13:07:32.929147 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929278 master-0 kubenswrapper[17411]: I0223 13:07:32.929176 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929278 master-0 kubenswrapper[17411]: I0223 13:07:32.929199 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929278 master-0 kubenswrapper[17411]: I0223 13:07:32.929217 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wmmh\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-kube-api-access-7wmmh\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929578 master-0 kubenswrapper[17411]: I0223 13:07:32.929305 17411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929578 master-0 kubenswrapper[17411]: I0223 13:07:32.929328 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-web-config\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929578 master-0 kubenswrapper[17411]: I0223 13:07:32.929346 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929578 master-0 kubenswrapper[17411]: I0223 13:07:32.929371 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929578 master-0 kubenswrapper[17411]: I0223 13:07:32.929413 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929578 master-0 kubenswrapper[17411]: I0223 13:07:32.929440 17411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929578 master-0 kubenswrapper[17411]: I0223 13:07:32.929461 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929578 master-0 kubenswrapper[17411]: I0223 13:07:32.929538 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929873 master-0 kubenswrapper[17411]: I0223 13:07:32.929589 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.929873 master-0 kubenswrapper[17411]: I0223 13:07:32.929639 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 
13:07:32.933144 master-0 kubenswrapper[17411]: I0223 13:07:32.932532 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.933144 master-0 kubenswrapper[17411]: E0223 13:07:32.932588 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:33.432552043 +0000 UTC m=+46.860058700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:32.933382 master-0 kubenswrapper[17411]: I0223 13:07:32.933337 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.934904 master-0 kubenswrapper[17411]: I0223 13:07:32.934819 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.935176 master-0 kubenswrapper[17411]: I0223 
13:07:32.935134 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.935334 master-0 kubenswrapper[17411]: I0223 13:07:32.935304 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.935512 master-0 kubenswrapper[17411]: I0223 13:07:32.935470 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.941302 master-0 kubenswrapper[17411]: I0223 13:07:32.936971 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.941302 master-0 kubenswrapper[17411]: I0223 13:07:32.937370 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-web-config\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.941302 master-0 kubenswrapper[17411]: I0223 
13:07:32.937813 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.941302 master-0 kubenswrapper[17411]: I0223 13:07:32.938285 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.941302 master-0 kubenswrapper[17411]: I0223 13:07:32.938827 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.941302 master-0 kubenswrapper[17411]: I0223 13:07:32.939014 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.941302 master-0 kubenswrapper[17411]: I0223 13:07:32.939098 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config-out\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.942455 
master-0 kubenswrapper[17411]: I0223 13:07:32.942413 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.942854 master-0 kubenswrapper[17411]: I0223 13:07:32.942802 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.952618 master-0 kubenswrapper[17411]: I0223 13:07:32.952545 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:32.954887 master-0 kubenswrapper[17411]: I0223 13:07:32.954859 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wmmh\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-kube-api-access-7wmmh\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:33.445129 master-0 kubenswrapper[17411]: I0223 13:07:33.442533 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:33.445129 master-0 kubenswrapper[17411]: E0223 
13:07:33.442843 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:34.442816627 +0000 UTC m=+47.870323224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:33.500881 master-0 kubenswrapper[17411]: I0223 13:07:33.500828 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-688cc79566-cxl6b"] Feb 23 13:07:33.505262 master-0 kubenswrapper[17411]: W0223 13:07:33.505122 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3aad4da_e494_4c10_9385_68f1c22a5a5a.slice/crio-931662cee8c0f3fee4a65ba35dbe432f7597a59838dee7d324cf67e3014631ce WatchSource:0}: Error finding container 931662cee8c0f3fee4a65ba35dbe432f7597a59838dee7d324cf67e3014631ce: Status 404 returned error can't find the container with id 931662cee8c0f3fee4a65ba35dbe432f7597a59838dee7d324cf67e3014631ce Feb 23 13:07:33.519568 master-0 kubenswrapper[17411]: I0223 13:07:33.519496 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b" event={"ID":"d3aad4da-e494-4c10-9385-68f1c22a5a5a","Type":"ContainerStarted","Data":"931662cee8c0f3fee4a65ba35dbe432f7597a59838dee7d324cf67e3014631ce"} Feb 23 13:07:33.533178 master-0 kubenswrapper[17411]: I0223 13:07:33.533104 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" 
event={"ID":"a75814ac-2491-4c2b-9c62-de6cd5023f5b","Type":"ContainerStarted","Data":"3fa4b4c93814de12e0cf658ed18f5fe47cf68e7254b62888172607a474edda0a"} Feb 23 13:07:33.533178 master-0 kubenswrapper[17411]: I0223 13:07:33.533170 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" event={"ID":"a75814ac-2491-4c2b-9c62-de6cd5023f5b","Type":"ContainerStarted","Data":"51eb69dfea3a0a2a50122ae47c3630b8541345657867bab44a8c8eb79299ba8a"} Feb 23 13:07:33.582644 master-0 kubenswrapper[17411]: I0223 13:07:33.582569 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-755ccb876-g7rtk"] Feb 23 13:07:33.590849 master-0 kubenswrapper[17411]: W0223 13:07:33.590755 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58867c81_d4c7_4740_84c5_cb399cf415a1.slice/crio-0a05a235b0cf70307b8a4a2f3069b4a327bdead4c93794443bc2817c7f467813 WatchSource:0}: Error finding container 0a05a235b0cf70307b8a4a2f3069b4a327bdead4c93794443bc2817c7f467813: Status 404 returned error can't find the container with id 0a05a235b0cf70307b8a4a2f3069b4a327bdead4c93794443bc2817c7f467813 Feb 23 13:07:33.597627 master-0 kubenswrapper[17411]: I0223 13:07:33.597592 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rhj5d"] Feb 23 13:07:33.605654 master-0 kubenswrapper[17411]: W0223 13:07:33.605621 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce5a6b36_46f6_42b7_8240_ca27d4e47e30.slice/crio-cbe2aee05facf137823e9309de077a0bc2d775829a8d9fa63dfa14ac9e57ea35 WatchSource:0}: Error finding container cbe2aee05facf137823e9309de077a0bc2d775829a8d9fa63dfa14ac9e57ea35: Status 404 returned error can't find the container with id cbe2aee05facf137823e9309de077a0bc2d775829a8d9fa63dfa14ac9e57ea35 Feb 23 13:07:34.458874 
master-0 kubenswrapper[17411]: I0223 13:07:34.458787 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:34.459462 master-0 kubenswrapper[17411]: E0223 13:07:34.459052 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:36.459016363 +0000 UTC m=+49.886522970 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:34.544005 master-0 kubenswrapper[17411]: I0223 13:07:34.543936 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rhj5d" event={"ID":"ce5a6b36-46f6-42b7-8240-ca27d4e47e30","Type":"ContainerStarted","Data":"937c6947516447ef586b05737529b0b4c8d3c1d17879fd4fb6561eaae18f14ec"} Feb 23 13:07:34.544005 master-0 kubenswrapper[17411]: I0223 13:07:34.544000 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rhj5d" event={"ID":"ce5a6b36-46f6-42b7-8240-ca27d4e47e30","Type":"ContainerStarted","Data":"cbe2aee05facf137823e9309de077a0bc2d775829a8d9fa63dfa14ac9e57ea35"} Feb 23 13:07:34.548082 master-0 kubenswrapper[17411]: I0223 13:07:34.548034 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" 
event={"ID":"a75814ac-2491-4c2b-9c62-de6cd5023f5b","Type":"ContainerStarted","Data":"79302b380d82d81f12d8fa3ee263898cf3147bd34b8958e70610c1d0b0665591"} Feb 23 13:07:34.550346 master-0 kubenswrapper[17411]: I0223 13:07:34.549938 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk" event={"ID":"58867c81-d4c7-4740-84c5-cb399cf415a1","Type":"ContainerStarted","Data":"0a05a235b0cf70307b8a4a2f3069b4a327bdead4c93794443bc2817c7f467813"} Feb 23 13:07:34.565057 master-0 kubenswrapper[17411]: I0223 13:07:34.564990 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rhj5d" podStartSLOduration=35.56497348 podStartE2EDuration="35.56497348s" podCreationTimestamp="2026-02-23 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:34.561450421 +0000 UTC m=+47.988957018" watchObservedRunningTime="2026-02-23 13:07:34.56497348 +0000 UTC m=+47.992480077" Feb 23 13:07:35.182207 master-0 kubenswrapper[17411]: I0223 13:07:35.181481 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:35.182207 master-0 kubenswrapper[17411]: E0223 13:07:35.181718 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:43.181697346 +0000 UTC m=+56.609203943 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:35.558972 master-0 kubenswrapper[17411]: I0223 13:07:35.558600 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" event={"ID":"a75814ac-2491-4c2b-9c62-de6cd5023f5b","Type":"ContainerStarted","Data":"fafee058759ea044828b0639e765596400c3df1493691b41f9bb39519ace4282"} Feb 23 13:07:35.560265 master-0 kubenswrapper[17411]: I0223 13:07:35.560219 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b" event={"ID":"d3aad4da-e494-4c10-9385-68f1c22a5a5a","Type":"ContainerStarted","Data":"656ef7feee73e1119791083883e90386686c236540e2b42c1930ad1adc826d93"} Feb 23 13:07:35.561660 master-0 kubenswrapper[17411]: I0223 13:07:35.561618 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b" Feb 23 13:07:35.563835 master-0 kubenswrapper[17411]: I0223 13:07:35.563798 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk" event={"ID":"58867c81-d4c7-4740-84c5-cb399cf415a1","Type":"ContainerStarted","Data":"ad5ad4641a88b620fe0a60e06a6aa60e0dc5d8d6ccbbf7b31a2432724a798b41"} Feb 23 13:07:35.564017 master-0 kubenswrapper[17411]: I0223 13:07:35.563987 17411 patch_prober.go:28] interesting pod/monitoring-plugin-688cc79566-cxl6b container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.85:9443/health\": dial tcp 10.128.0.85:9443: connect: connection refused" start-of-body= Feb 23 13:07:35.564088 master-0 kubenswrapper[17411]: I0223 13:07:35.564038 
17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b" podUID="d3aad4da-e494-4c10-9385-68f1c22a5a5a" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.128.0.85:9443/health\": dial tcp 10.128.0.85:9443: connect: connection refused" Feb 23 13:07:35.590558 master-0 kubenswrapper[17411]: I0223 13:07:35.589140 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b" podStartSLOduration=1.794230814 podStartE2EDuration="3.589118611s" podCreationTimestamp="2026-02-23 13:07:32 +0000 UTC" firstStartedPulling="2026-02-23 13:07:33.510485355 +0000 UTC m=+46.937991952" lastFinishedPulling="2026-02-23 13:07:35.305373152 +0000 UTC m=+48.732879749" observedRunningTime="2026-02-23 13:07:35.58484326 +0000 UTC m=+49.012349877" watchObservedRunningTime="2026-02-23 13:07:35.589118611 +0000 UTC m=+49.016625228" Feb 23 13:07:36.197746 master-0 kubenswrapper[17411]: I0223 13:07:36.197673 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:07:36.198311 master-0 kubenswrapper[17411]: E0223 13:07:36.198108 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:08.197994734 +0000 UTC m=+81.625501361 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:36.464922 master-0 kubenswrapper[17411]: I0223 13:07:36.464723 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk" podStartSLOduration=3.735674511 podStartE2EDuration="5.464687202s" podCreationTimestamp="2026-02-23 13:07:31 +0000 UTC" firstStartedPulling="2026-02-23 13:07:33.601286264 +0000 UTC m=+47.028792861" lastFinishedPulling="2026-02-23 13:07:35.330298955 +0000 UTC m=+48.757805552" observedRunningTime="2026-02-23 13:07:35.602293192 +0000 UTC m=+49.029799839" watchObservedRunningTime="2026-02-23 13:07:36.464687202 +0000 UTC m=+49.892193839" Feb 23 13:07:36.469352 master-0 kubenswrapper[17411]: I0223 13:07:36.469286 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:07:36.470878 master-0 kubenswrapper[17411]: I0223 13:07:36.470834 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.473470 master-0 kubenswrapper[17411]: I0223 13:07:36.473410 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-q2chk" Feb 23 13:07:36.473662 master-0 kubenswrapper[17411]: I0223 13:07:36.473599 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:07:36.476407 master-0 kubenswrapper[17411]: I0223 13:07:36.476365 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:07:36.506512 master-0 kubenswrapper[17411]: I0223 13:07:36.506450 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:36.506752 master-0 kubenswrapper[17411]: E0223 13:07:36.506648 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:40.506615534 +0000 UTC m=+53.934122171 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:36.578223 master-0 kubenswrapper[17411]: I0223 13:07:36.578105 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" event={"ID":"a75814ac-2491-4c2b-9c62-de6cd5023f5b","Type":"ContainerStarted","Data":"0250bc2fe6a231c87b2f9628f8973320006cb01187ca6b7d808cd17b07e13ad9"} Feb 23 13:07:36.578223 master-0 kubenswrapper[17411]: I0223 13:07:36.578210 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" event={"ID":"a75814ac-2491-4c2b-9c62-de6cd5023f5b","Type":"ContainerStarted","Data":"4334c8c3528057ac7af092773c3feb47bcd849893a8832443de4138d9a5339cc"} Feb 23 13:07:36.578998 master-0 kubenswrapper[17411]: I0223 13:07:36.578887 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" Feb 23 13:07:36.584610 master-0 kubenswrapper[17411]: I0223 13:07:36.584567 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-688cc79566-cxl6b" Feb 23 13:07:36.608391 master-0 kubenswrapper[17411]: I0223 13:07:36.608316 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" podStartSLOduration=2.464456158 podStartE2EDuration="7.60829658s" podCreationTimestamp="2026-02-23 13:07:29 +0000 UTC" firstStartedPulling="2026-02-23 13:07:30.156999072 +0000 UTC m=+43.584505689" lastFinishedPulling="2026-02-23 13:07:35.300839524 +0000 UTC m=+48.728346111" observedRunningTime="2026-02-23 13:07:36.605622385 +0000 UTC m=+50.033129002" 
watchObservedRunningTime="2026-02-23 13:07:36.60829658 +0000 UTC m=+50.035803187" Feb 23 13:07:36.608602 master-0 kubenswrapper[17411]: I0223 13:07:36.608567 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-var-lock\") pod \"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.608873 master-0 kubenswrapper[17411]: I0223 13:07:36.608801 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/649c8f56-22ef-4e68-bc9b-9d608fba998c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.608939 master-0 kubenswrapper[17411]: I0223 13:07:36.608917 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.710759 master-0 kubenswrapper[17411]: I0223 13:07:36.710678 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-var-lock\") pod \"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.711077 master-0 kubenswrapper[17411]: I0223 13:07:36.710777 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/649c8f56-22ef-4e68-bc9b-9d608fba998c-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.711077 master-0 kubenswrapper[17411]: I0223 13:07:36.710812 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.712077 master-0 kubenswrapper[17411]: I0223 13:07:36.712052 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-var-lock\") pod \"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.712494 master-0 kubenswrapper[17411]: I0223 13:07:36.712429 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.734640 master-0 kubenswrapper[17411]: I0223 13:07:36.734489 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/649c8f56-22ef-4e68-bc9b-9d608fba998c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:36.790524 master-0 kubenswrapper[17411]: I0223 13:07:36.790425 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 23 13:07:37.261627 master-0 kubenswrapper[17411]: I0223 13:07:37.261468 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 23 13:07:37.588592 master-0 kubenswrapper[17411]: I0223 13:07:37.588195 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"649c8f56-22ef-4e68-bc9b-9d608fba998c","Type":"ContainerStarted","Data":"f1521dc299e825db85f41a3f5ce09ee770285ed9eca4a5f73654268f61fd88f9"} Feb 23 13:07:38.596825 master-0 kubenswrapper[17411]: I0223 13:07:38.596729 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"649c8f56-22ef-4e68-bc9b-9d608fba998c","Type":"ContainerStarted","Data":"0ad530397d7e0906f92bdc82f78dbc6b9a8f87e05a0492ec16d7cc020ef72a12"} Feb 23 13:07:38.619695 master-0 kubenswrapper[17411]: I0223 13:07:38.619602 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.6195787470000003 podStartE2EDuration="2.619578747s" podCreationTimestamp="2026-02-23 13:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:38.617869839 +0000 UTC m=+52.045376456" watchObservedRunningTime="2026-02-23 13:07:38.619578747 +0000 UTC m=+52.047085354" Feb 23 13:07:39.720707 master-0 kubenswrapper[17411]: I0223 13:07:39.720623 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5b5fbd9b56-86rpd" Feb 23 13:07:40.578137 master-0 kubenswrapper[17411]: I0223 13:07:40.578050 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:40.578511 master-0 kubenswrapper[17411]: E0223 13:07:40.578434 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:48.578387955 +0000 UTC m=+62.005894602 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:43.224184 master-0 kubenswrapper[17411]: I0223 13:07:43.224112 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:07:43.224963 master-0 kubenswrapper[17411]: E0223 13:07:43.224406 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. No retries permitted until 2026-02-23 13:07:59.224363473 +0000 UTC m=+72.651870100 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:46.811809 master-0 kubenswrapper[17411]: I0223 13:07:46.811738 17411 scope.go:117] "RemoveContainer" containerID="7e9526f21d0004f4be338f194dd1d8ef03df5393e98a9f29994fc1a1aea54d33" Feb 23 13:07:46.848388 master-0 kubenswrapper[17411]: I0223 13:07:46.848305 17411 scope.go:117] "RemoveContainer" containerID="128581ddbe7657ebd83ea9ba25a542fc8f1d7245b7d7a38fdcce26195377d53b" Feb 23 13:07:46.879868 master-0 kubenswrapper[17411]: I0223 13:07:46.879822 17411 scope.go:117] "RemoveContainer" containerID="321eaf326ad8a489a13ada6c53cf34c2c99e6344cfe3f0727f5eef006f9c7e8e" Feb 23 13:07:46.901912 master-0 kubenswrapper[17411]: I0223 13:07:46.901857 17411 scope.go:117] "RemoveContainer" containerID="6f08e1116d82edc6d1a5a54978dd03f762876e6846750a14b497bad3e1b62afe" Feb 23 13:07:48.618692 master-0 kubenswrapper[17411]: I0223 13:07:48.618613 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:07:48.619950 master-0 kubenswrapper[17411]: E0223 13:07:48.618892 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:04.618864601 +0000 UTC m=+78.046371228 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:07:50.492380 master-0 kubenswrapper[17411]: I0223 13:07:50.492299 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59947b7887-xg2ln"] Feb 23 13:07:50.492956 master-0 kubenswrapper[17411]: I0223 13:07:50.492577 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" containerID="cri-o://156ba0e4f441ce67c6a903cbeb763ed72ee61489eac14300f0897eae83857ad8" gracePeriod=30 Feb 23 13:07:50.519579 master-0 kubenswrapper[17411]: I0223 13:07:50.519512 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"] Feb 23 13:07:50.519807 master-0 kubenswrapper[17411]: I0223 13:07:50.519756 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" podUID="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" containerName="route-controller-manager" containerID="cri-o://022c9b5345f424d899a3eb1c0e7a0d156bb27c5c3be0d99e29d7ec4cb8956ba6" gracePeriod=30 Feb 23 13:07:50.694263 master-0 kubenswrapper[17411]: I0223 13:07:50.694186 17411 generic.go:334] "Generic (PLEG): container finished" podID="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" containerID="022c9b5345f424d899a3eb1c0e7a0d156bb27c5c3be0d99e29d7ec4cb8956ba6" exitCode=0 Feb 23 13:07:50.694451 master-0 kubenswrapper[17411]: I0223 13:07:50.694306 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" event={"ID":"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa","Type":"ContainerDied","Data":"022c9b5345f424d899a3eb1c0e7a0d156bb27c5c3be0d99e29d7ec4cb8956ba6"} Feb 23 13:07:50.696573 master-0 kubenswrapper[17411]: I0223 13:07:50.696536 17411 generic.go:334] "Generic (PLEG): container finished" podID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerID="156ba0e4f441ce67c6a903cbeb763ed72ee61489eac14300f0897eae83857ad8" exitCode=0 Feb 23 13:07:50.696629 master-0 kubenswrapper[17411]: I0223 13:07:50.696579 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" event={"ID":"18b48459-51ad-4b0d-8608-4ba6d3fa8e16","Type":"ContainerDied","Data":"156ba0e4f441ce67c6a903cbeb763ed72ee61489eac14300f0897eae83857ad8"} Feb 23 13:07:50.696629 master-0 kubenswrapper[17411]: I0223 13:07:50.696618 17411 scope.go:117] "RemoveContainer" containerID="cb2d2d4fb80101957c4b13b6c2b179a921353fd0e5984e898b9fcd6ec41fc1bb" Feb 23 13:07:51.030270 master-0 kubenswrapper[17411]: I0223 13:07:51.030200 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:07:51.056021 master-0 kubenswrapper[17411]: I0223 13:07:51.055974 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" Feb 23 13:07:51.056583 master-0 kubenswrapper[17411]: I0223 13:07:51.056546 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca\") pod \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " Feb 23 13:07:51.056653 master-0 kubenswrapper[17411]: I0223 13:07:51.056592 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config\") pod \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " Feb 23 13:07:51.056653 master-0 kubenswrapper[17411]: I0223 13:07:51.056633 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles\") pod \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " Feb 23 13:07:51.056733 master-0 kubenswrapper[17411]: I0223 13:07:51.056653 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjpkc\" (UniqueName: \"kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc\") pod \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " Feb 23 13:07:51.056879 master-0 kubenswrapper[17411]: I0223 13:07:51.056824 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config\") pod \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " Feb 23 13:07:51.056928 master-0 kubenswrapper[17411]: I0223 13:07:51.056914 
17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert\") pod \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\" (UID: \"18b48459-51ad-4b0d-8608-4ba6d3fa8e16\") " Feb 23 13:07:51.057082 master-0 kubenswrapper[17411]: I0223 13:07:51.057044 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert\") pod \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " Feb 23 13:07:51.057144 master-0 kubenswrapper[17411]: I0223 13:07:51.057111 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca\") pod \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " Feb 23 13:07:51.057144 master-0 kubenswrapper[17411]: I0223 13:07:51.057115 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca" (OuterVolumeSpecName: "client-ca") pod "18b48459-51ad-4b0d-8608-4ba6d3fa8e16" (UID: "18b48459-51ad-4b0d-8608-4ba6d3fa8e16"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:07:51.057340 master-0 kubenswrapper[17411]: I0223 13:07:51.057222 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c4jr\" (UniqueName: \"kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr\") pod \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\" (UID: \"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa\") " Feb 23 13:07:51.057419 master-0 kubenswrapper[17411]: I0223 13:07:51.057374 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config" (OuterVolumeSpecName: "config") pod "b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" (UID: "b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:07:51.057419 master-0 kubenswrapper[17411]: I0223 13:07:51.057388 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "18b48459-51ad-4b0d-8608-4ba6d3fa8e16" (UID: "18b48459-51ad-4b0d-8608-4ba6d3fa8e16"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:07:51.058012 master-0 kubenswrapper[17411]: I0223 13:07:51.057973 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca" (OuterVolumeSpecName: "client-ca") pod "b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" (UID: "b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:07:51.058300 master-0 kubenswrapper[17411]: I0223 13:07:51.057676 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config" (OuterVolumeSpecName: "config") pod "18b48459-51ad-4b0d-8608-4ba6d3fa8e16" (UID: "18b48459-51ad-4b0d-8608-4ba6d3fa8e16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:07:51.058371 master-0 kubenswrapper[17411]: I0223 13:07:51.058342 17411 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.058371 master-0 kubenswrapper[17411]: I0223 13:07:51.058367 17411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.058450 master-0 kubenswrapper[17411]: I0223 13:07:51.058381 17411 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.058450 master-0 kubenswrapper[17411]: I0223 13:07:51.058397 17411 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.060340 master-0 kubenswrapper[17411]: I0223 13:07:51.060300 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr" (OuterVolumeSpecName: "kube-api-access-8c4jr") pod "b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" (UID: "b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa"). 
InnerVolumeSpecName "kube-api-access-8c4jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:07:51.060462 master-0 kubenswrapper[17411]: I0223 13:07:51.060416 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "18b48459-51ad-4b0d-8608-4ba6d3fa8e16" (UID: "18b48459-51ad-4b0d-8608-4ba6d3fa8e16"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:07:51.060629 master-0 kubenswrapper[17411]: I0223 13:07:51.060568 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc" (OuterVolumeSpecName: "kube-api-access-cjpkc") pod "18b48459-51ad-4b0d-8608-4ba6d3fa8e16" (UID: "18b48459-51ad-4b0d-8608-4ba6d3fa8e16"). InnerVolumeSpecName "kube-api-access-cjpkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:07:51.060974 master-0 kubenswrapper[17411]: I0223 13:07:51.060944 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" (UID: "b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:07:51.160215 master-0 kubenswrapper[17411]: I0223 13:07:51.159996 17411 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.160215 master-0 kubenswrapper[17411]: I0223 13:07:51.160061 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c4jr\" (UniqueName: \"kubernetes.io/projected/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa-kube-api-access-8c4jr\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.160215 master-0 kubenswrapper[17411]: I0223 13:07:51.160081 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjpkc\" (UniqueName: \"kubernetes.io/projected/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-kube-api-access-cjpkc\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.160215 master-0 kubenswrapper[17411]: I0223 13:07:51.160095 17411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.160215 master-0 kubenswrapper[17411]: I0223 13:07:51.160107 17411 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b48459-51ad-4b0d-8608-4ba6d3fa8e16-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:51.705458 master-0 kubenswrapper[17411]: I0223 13:07:51.705347 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" event={"ID":"b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa","Type":"ContainerDied","Data":"9933c3953079b9e9be4ada69849d6fdb342498ae2f03fc5ebff1e04b6c03839b"} Feb 23 13:07:51.705458 master-0 kubenswrapper[17411]: I0223 13:07:51.705405 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2" Feb 23 13:07:51.705458 master-0 kubenswrapper[17411]: I0223 13:07:51.705459 17411 scope.go:117] "RemoveContainer" containerID="022c9b5345f424d899a3eb1c0e7a0d156bb27c5c3be0d99e29d7ec4cb8956ba6" Feb 23 13:07:51.707328 master-0 kubenswrapper[17411]: I0223 13:07:51.707084 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" event={"ID":"18b48459-51ad-4b0d-8608-4ba6d3fa8e16","Type":"ContainerDied","Data":"b279587ff3b533f90c8598bc9cab9d154d09bb9caaf9f198b885d5940932b084"} Feb 23 13:07:51.707328 master-0 kubenswrapper[17411]: I0223 13:07:51.707166 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59947b7887-xg2ln" Feb 23 13:07:51.741320 master-0 kubenswrapper[17411]: I0223 13:07:51.741232 17411 scope.go:117] "RemoveContainer" containerID="156ba0e4f441ce67c6a903cbeb763ed72ee61489eac14300f0897eae83857ad8" Feb 23 13:07:51.868724 master-0 kubenswrapper[17411]: I0223 13:07:51.868629 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59947b7887-xg2ln"] Feb 23 13:07:51.880900 master-0 kubenswrapper[17411]: I0223 13:07:51.880767 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59947b7887-xg2ln"] Feb 23 13:07:51.895373 master-0 kubenswrapper[17411]: I0223 13:07:51.895281 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"] Feb 23 13:07:51.908177 master-0 kubenswrapper[17411]: I0223 13:07:51.908113 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ccc6b554-znpw2"] Feb 23 13:07:52.063234 master-0 kubenswrapper[17411]: I0223 13:07:52.060933 17411 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65d5554fbd-fw5c9"] Feb 23 13:07:52.064042 master-0 kubenswrapper[17411]: E0223 13:07:52.063944 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" Feb 23 13:07:52.064042 master-0 kubenswrapper[17411]: I0223 13:07:52.064034 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" Feb 23 13:07:52.064210 master-0 kubenswrapper[17411]: E0223 13:07:52.064062 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" containerName="route-controller-manager" Feb 23 13:07:52.064210 master-0 kubenswrapper[17411]: I0223 13:07:52.064083 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" containerName="route-controller-manager" Feb 23 13:07:52.064210 master-0 kubenswrapper[17411]: E0223 13:07:52.064124 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" Feb 23 13:07:52.064210 master-0 kubenswrapper[17411]: I0223 13:07:52.064143 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" Feb 23 13:07:52.064493 master-0 kubenswrapper[17411]: I0223 13:07:52.064465 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" Feb 23 13:07:52.064562 master-0 kubenswrapper[17411]: I0223 13:07:52.064502 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" containerName="controller-manager" Feb 23 13:07:52.064562 master-0 kubenswrapper[17411]: I0223 13:07:52.064544 17411 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" containerName="route-controller-manager" Feb 23 13:07:52.065452 master-0 kubenswrapper[17411]: I0223 13:07:52.065393 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8"] Feb 23 13:07:52.065661 master-0 kubenswrapper[17411]: I0223 13:07:52.065605 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.067070 master-0 kubenswrapper[17411]: I0223 13:07:52.067014 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.069441 master-0 kubenswrapper[17411]: I0223 13:07:52.069379 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:07:52.071460 master-0 kubenswrapper[17411]: I0223 13:07:52.071408 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-n8vwz" Feb 23 13:07:52.072109 master-0 kubenswrapper[17411]: I0223 13:07:52.072031 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:07:52.072527 master-0 kubenswrapper[17411]: I0223 13:07:52.072469 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-wt8dr" Feb 23 13:07:52.072915 master-0 kubenswrapper[17411]: I0223 13:07:52.072860 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:07:52.073723 master-0 kubenswrapper[17411]: I0223 13:07:52.073664 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" 
Feb 23 13:07:52.074199 master-0 kubenswrapper[17411]: I0223 13:07:52.074157 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:07:52.074338 master-0 kubenswrapper[17411]: I0223 13:07:52.074272 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:07:52.074863 master-0 kubenswrapper[17411]: I0223 13:07:52.074824 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:07:52.074863 master-0 kubenswrapper[17411]: I0223 13:07:52.074826 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:07:52.075009 master-0 kubenswrapper[17411]: I0223 13:07:52.074989 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:07:52.075190 master-0 kubenswrapper[17411]: I0223 13:07:52.075131 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 13:07:52.075448 master-0 kubenswrapper[17411]: I0223 13:07:52.075402 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-config\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.075842 master-0 kubenswrapper[17411]: I0223 13:07:52.075764 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-serving-cert\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: 
\"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.076044 master-0 kubenswrapper[17411]: I0223 13:07:52.075975 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmgf\" (UniqueName: \"kubernetes.io/projected/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-kube-api-access-zvmgf\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.076274 master-0 kubenswrapper[17411]: I0223 13:07:52.076189 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-proxy-ca-bundles\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.076368 master-0 kubenswrapper[17411]: I0223 13:07:52.076271 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-client-ca\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.083475 master-0 kubenswrapper[17411]: I0223 13:07:52.083419 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:07:52.154910 master-0 kubenswrapper[17411]: I0223 13:07:52.154824 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8"] Feb 23 13:07:52.158271 master-0 kubenswrapper[17411]: I0223 13:07:52.158197 17411 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65d5554fbd-fw5c9"] Feb 23 13:07:52.163996 master-0 kubenswrapper[17411]: I0223 13:07:52.163934 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk" Feb 23 13:07:52.164166 master-0 kubenswrapper[17411]: I0223 13:07:52.164108 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk" Feb 23 13:07:52.178394 master-0 kubenswrapper[17411]: I0223 13:07:52.178351 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvmgf\" (UniqueName: \"kubernetes.io/projected/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-kube-api-access-zvmgf\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.178528 master-0 kubenswrapper[17411]: I0223 13:07:52.178436 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-serving-cert\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.178599 master-0 kubenswrapper[17411]: I0223 13:07:52.178581 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-proxy-ca-bundles\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.178663 master-0 kubenswrapper[17411]: I0223 13:07:52.178617 17411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-client-ca\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.179012 master-0 kubenswrapper[17411]: I0223 13:07:52.178950 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-client-ca\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.179158 master-0 kubenswrapper[17411]: I0223 13:07:52.179103 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-config\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.179408 master-0 kubenswrapper[17411]: I0223 13:07:52.179346 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5ggg\" (UniqueName: \"kubernetes.io/projected/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-kube-api-access-h5ggg\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.179623 master-0 kubenswrapper[17411]: I0223 13:07:52.179575 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-config\") pod 
\"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.179705 master-0 kubenswrapper[17411]: I0223 13:07:52.179637 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-serving-cert\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.179890 master-0 kubenswrapper[17411]: I0223 13:07:52.179829 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-client-ca\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.180622 master-0 kubenswrapper[17411]: I0223 13:07:52.180579 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-proxy-ca-bundles\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.181226 master-0 kubenswrapper[17411]: I0223 13:07:52.181179 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-config\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.183216 master-0 kubenswrapper[17411]: I0223 13:07:52.183173 17411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-serving-cert\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.247370 master-0 kubenswrapper[17411]: I0223 13:07:52.247295 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvmgf\" (UniqueName: \"kubernetes.io/projected/bfa537d0-11d0-4e8d-8b0e-bd5959f586f4-kube-api-access-zvmgf\") pod \"controller-manager-65d5554fbd-fw5c9\" (UID: \"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4\") " pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.280449 master-0 kubenswrapper[17411]: I0223 13:07:52.280397 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5ggg\" (UniqueName: \"kubernetes.io/projected/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-kube-api-access-h5ggg\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.280660 master-0 kubenswrapper[17411]: I0223 13:07:52.280485 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-config\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.280698 master-0 kubenswrapper[17411]: I0223 13:07:52.280678 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-serving-cert\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: 
\"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.280946 master-0 kubenswrapper[17411]: I0223 13:07:52.280892 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-client-ca\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.281856 master-0 kubenswrapper[17411]: I0223 13:07:52.281827 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-client-ca\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.282018 master-0 kubenswrapper[17411]: I0223 13:07:52.281991 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-config\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.284973 master-0 kubenswrapper[17411]: I0223 13:07:52.284885 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-serving-cert\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.303089 master-0 kubenswrapper[17411]: I0223 13:07:52.303039 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h5ggg\" (UniqueName: \"kubernetes.io/projected/dc1620b0-3903-418b-9dd2-1f99bc5a0ae8-kube-api-access-h5ggg\") pod \"route-controller-manager-78784b9d57-r4sf8\" (UID: \"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8\") " pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.426882 master-0 kubenswrapper[17411]: I0223 13:07:52.426761 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:52.514429 master-0 kubenswrapper[17411]: I0223 13:07:52.514340 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:52.817369 master-0 kubenswrapper[17411]: I0223 13:07:52.817300 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65d5554fbd-fw5c9"] Feb 23 13:07:52.822979 master-0 kubenswrapper[17411]: W0223 13:07:52.822906 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfa537d0_11d0_4e8d_8b0e_bd5959f586f4.slice/crio-ecdf4215361fd36bc9144d1eeafff4ee5a9e742d88fb760b1046ac16c53f40ac WatchSource:0}: Error finding container ecdf4215361fd36bc9144d1eeafff4ee5a9e742d88fb760b1046ac16c53f40ac: Status 404 returned error can't find the container with id ecdf4215361fd36bc9144d1eeafff4ee5a9e742d88fb760b1046ac16c53f40ac Feb 23 13:07:52.881592 master-0 kubenswrapper[17411]: I0223 13:07:52.881529 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b48459-51ad-4b0d-8608-4ba6d3fa8e16" path="/var/lib/kubelet/pods/18b48459-51ad-4b0d-8608-4ba6d3fa8e16/volumes" Feb 23 13:07:52.883996 master-0 kubenswrapper[17411]: I0223 13:07:52.883950 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa" path="/var/lib/kubelet/pods/b53d3c98-e99c-4f4e-a9dc-91e3ad30efaa/volumes" Feb 23 13:07:52.924184 master-0 kubenswrapper[17411]: I0223 13:07:52.924123 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8"] Feb 23 13:07:52.931507 master-0 kubenswrapper[17411]: W0223 13:07:52.931452 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc1620b0_3903_418b_9dd2_1f99bc5a0ae8.slice/crio-c3750def6ede99d36d1858ef113a3d16266cecb2b0b2268746648d4820fee65f WatchSource:0}: Error finding container c3750def6ede99d36d1858ef113a3d16266cecb2b0b2268746648d4820fee65f: Status 404 returned error can't find the container with id c3750def6ede99d36d1858ef113a3d16266cecb2b0b2268746648d4820fee65f Feb 23 13:07:53.729063 master-0 kubenswrapper[17411]: I0223 13:07:53.728980 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" event={"ID":"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4","Type":"ContainerStarted","Data":"d7cecdb78483464ca842eef33778c826aa1ed5cf76ce100a4441589d8e22de94"} Feb 23 13:07:53.729063 master-0 kubenswrapper[17411]: I0223 13:07:53.729073 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" event={"ID":"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4","Type":"ContainerStarted","Data":"ecdf4215361fd36bc9144d1eeafff4ee5a9e742d88fb760b1046ac16c53f40ac"} Feb 23 13:07:53.729672 master-0 kubenswrapper[17411]: I0223 13:07:53.729568 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:53.732432 master-0 kubenswrapper[17411]: I0223 13:07:53.732350 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" event={"ID":"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8","Type":"ContainerStarted","Data":"2ce8dd30e28f7373e2d6bc5d3ffecbad9102db5068c6325288481dd16f27c6a9"} Feb 23 13:07:53.732432 master-0 kubenswrapper[17411]: I0223 13:07:53.732418 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" event={"ID":"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8","Type":"ContainerStarted","Data":"c3750def6ede99d36d1858ef113a3d16266cecb2b0b2268746648d4820fee65f"} Feb 23 13:07:53.732814 master-0 kubenswrapper[17411]: I0223 13:07:53.732778 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:53.734936 master-0 kubenswrapper[17411]: I0223 13:07:53.734884 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" Feb 23 13:07:53.738696 master-0 kubenswrapper[17411]: I0223 13:07:53.738646 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:07:53.750614 master-0 kubenswrapper[17411]: I0223 13:07:53.750542 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" podStartSLOduration=3.75052428 podStartE2EDuration="3.75052428s" podCreationTimestamp="2026-02-23 13:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:53.749691166 +0000 UTC m=+67.177197783" watchObservedRunningTime="2026-02-23 13:07:53.75052428 +0000 UTC m=+67.178030877" Feb 23 13:07:53.792378 master-0 kubenswrapper[17411]: I0223 13:07:53.790107 17411 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podStartSLOduration=3.790084025 podStartE2EDuration="3.790084025s" podCreationTimestamp="2026-02-23 13:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:07:53.788430158 +0000 UTC m=+67.215936775" watchObservedRunningTime="2026-02-23 13:07:53.790084025 +0000 UTC m=+67.217590612" Feb 23 13:07:54.743812 master-0 kubenswrapper[17411]: I0223 13:07:54.743771 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:07:54.744870 master-0 kubenswrapper[17411]: I0223 13:07:54.744794 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager" containerID="cri-o://6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994" gracePeriod=30 Feb 23 13:07:54.744870 master-0 kubenswrapper[17411]: I0223 13:07:54.744818 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc" gracePeriod=30 Feb 23 13:07:54.744988 master-0 kubenswrapper[17411]: I0223 13:07:54.744878 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc" 
gracePeriod=30 Feb 23 13:07:54.745099 master-0 kubenswrapper[17411]: I0223 13:07:54.745020 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="cluster-policy-controller" containerID="cri-o://dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b" gracePeriod=30 Feb 23 13:07:54.745711 master-0 kubenswrapper[17411]: I0223 13:07:54.745527 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:07:54.745893 master-0 kubenswrapper[17411]: E0223 13:07:54.745863 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager" Feb 23 13:07:54.745893 master-0 kubenswrapper[17411]: I0223 13:07:54.745886 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager" Feb 23 13:07:54.745971 master-0 kubenswrapper[17411]: E0223 13:07:54.745902 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager-cert-syncer" Feb 23 13:07:54.745971 master-0 kubenswrapper[17411]: I0223 13:07:54.745910 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager-cert-syncer" Feb 23 13:07:54.745971 master-0 kubenswrapper[17411]: E0223 13:07:54.745917 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager-recovery-controller" Feb 23 13:07:54.745971 master-0 kubenswrapper[17411]: I0223 13:07:54.745924 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager-recovery-controller" Feb 23 
13:07:54.745971 master-0 kubenswrapper[17411]: E0223 13:07:54.745953 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="cluster-policy-controller" Feb 23 13:07:54.745971 master-0 kubenswrapper[17411]: I0223 13:07:54.745959 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="cluster-policy-controller" Feb 23 13:07:54.746176 master-0 kubenswrapper[17411]: I0223 13:07:54.746082 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager-cert-syncer" Feb 23 13:07:54.746176 master-0 kubenswrapper[17411]: I0223 13:07:54.746104 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="cluster-policy-controller" Feb 23 13:07:54.746176 master-0 kubenswrapper[17411]: I0223 13:07:54.746118 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager-recovery-controller" Feb 23 13:07:54.746176 master-0 kubenswrapper[17411]: I0223 13:07:54.746137 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c8e14cb165534672d5ddc06061f8f2" containerName="kube-controller-manager" Feb 23 13:07:54.923933 master-0 kubenswrapper[17411]: I0223 13:07:54.923859 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"38b7ce474df02ea287eb02ea513a627a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:07:54.924838 master-0 kubenswrapper[17411]: I0223 13:07:54.924760 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"38b7ce474df02ea287eb02ea513a627a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:07:54.934544 master-0 kubenswrapper[17411]: I0223 13:07:54.934506 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_05c8e14cb165534672d5ddc06061f8f2/kube-controller-manager-cert-syncer/0.log" Feb 23 13:07:54.935577 master-0 kubenswrapper[17411]: I0223 13:07:54.935538 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:07:54.940079 master-0 kubenswrapper[17411]: I0223 13:07:54.940026 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="05c8e14cb165534672d5ddc06061f8f2" podUID="38b7ce474df02ea287eb02ea513a627a" Feb 23 13:07:55.026498 master-0 kubenswrapper[17411]: I0223 13:07:55.026381 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"38b7ce474df02ea287eb02ea513a627a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:07:55.026675 master-0 kubenswrapper[17411]: I0223 13:07:55.026505 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"38b7ce474df02ea287eb02ea513a627a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:07:55.026675 master-0 kubenswrapper[17411]: I0223 13:07:55.026668 17411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"38b7ce474df02ea287eb02ea513a627a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:07:55.026760 master-0 kubenswrapper[17411]: I0223 13:07:55.026725 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"38b7ce474df02ea287eb02ea513a627a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:07:55.127972 master-0 kubenswrapper[17411]: I0223 13:07:55.127917 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir\") pod \"05c8e14cb165534672d5ddc06061f8f2\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " Feb 23 13:07:55.128175 master-0 kubenswrapper[17411]: I0223 13:07:55.128014 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "05c8e14cb165534672d5ddc06061f8f2" (UID: "05c8e14cb165534672d5ddc06061f8f2"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:07:55.128175 master-0 kubenswrapper[17411]: I0223 13:07:55.128122 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir\") pod \"05c8e14cb165534672d5ddc06061f8f2\" (UID: \"05c8e14cb165534672d5ddc06061f8f2\") " Feb 23 13:07:55.128294 master-0 kubenswrapper[17411]: I0223 13:07:55.128241 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "05c8e14cb165534672d5ddc06061f8f2" (UID: "05c8e14cb165534672d5ddc06061f8f2"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:07:55.128543 master-0 kubenswrapper[17411]: I0223 13:07:55.128515 17411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:55.128543 master-0 kubenswrapper[17411]: I0223 13:07:55.128538 17411 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05c8e14cb165534672d5ddc06061f8f2-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:07:55.746311 master-0 kubenswrapper[17411]: I0223 13:07:55.746266 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_05c8e14cb165534672d5ddc06061f8f2/kube-controller-manager-cert-syncer/0.log" Feb 23 13:07:55.747429 master-0 kubenswrapper[17411]: I0223 13:07:55.747374 17411 generic.go:334] "Generic (PLEG): container finished" podID="05c8e14cb165534672d5ddc06061f8f2" containerID="1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc" exitCode=0 Feb 23 13:07:55.747489 master-0 kubenswrapper[17411]: 
I0223 13:07:55.747431 17411 generic.go:334] "Generic (PLEG): container finished" podID="05c8e14cb165534672d5ddc06061f8f2" containerID="1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc" exitCode=2
Feb 23 13:07:55.747489 master-0 kubenswrapper[17411]: I0223 13:07:55.747450 17411 generic.go:334] "Generic (PLEG): container finished" podID="05c8e14cb165534672d5ddc06061f8f2" containerID="dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b" exitCode=0
Feb 23 13:07:55.747489 master-0 kubenswrapper[17411]: I0223 13:07:55.747458 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:07:55.747596 master-0 kubenswrapper[17411]: I0223 13:07:55.747520 17411 scope.go:117] "RemoveContainer" containerID="1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"
Feb 23 13:07:55.747596 master-0 kubenswrapper[17411]: I0223 13:07:55.747463 17411 generic.go:334] "Generic (PLEG): container finished" podID="05c8e14cb165534672d5ddc06061f8f2" containerID="6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994" exitCode=0
Feb 23 13:07:55.750701 master-0 kubenswrapper[17411]: I0223 13:07:55.750560 17411 generic.go:334] "Generic (PLEG): container finished" podID="93c37e01-20fe-43f0-b014-2aaf7a3c2b8b" containerID="9f10bceb7445336e1df66d48a02ebd47ea2dc043a12ac6b767935a8559b8145f" exitCode=0
Feb 23 13:07:55.750701 master-0 kubenswrapper[17411]: I0223 13:07:55.750646 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b","Type":"ContainerDied","Data":"9f10bceb7445336e1df66d48a02ebd47ea2dc043a12ac6b767935a8559b8145f"}
Feb 23 13:07:55.754587 master-0 kubenswrapper[17411]: I0223 13:07:55.754464 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="05c8e14cb165534672d5ddc06061f8f2" podUID="38b7ce474df02ea287eb02ea513a627a"
Feb 23 13:07:55.766529 master-0 kubenswrapper[17411]: I0223 13:07:55.766489 17411 scope.go:117] "RemoveContainer" containerID="1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"
Feb 23 13:07:55.781162 master-0 kubenswrapper[17411]: I0223 13:07:55.781110 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="05c8e14cb165534672d5ddc06061f8f2" podUID="38b7ce474df02ea287eb02ea513a627a"
Feb 23 13:07:55.787164 master-0 kubenswrapper[17411]: I0223 13:07:55.787106 17411 scope.go:117] "RemoveContainer" containerID="dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"
Feb 23 13:07:55.808900 master-0 kubenswrapper[17411]: I0223 13:07:55.808847 17411 scope.go:117] "RemoveContainer" containerID="6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"
Feb 23 13:07:55.826546 master-0 kubenswrapper[17411]: I0223 13:07:55.826500 17411 scope.go:117] "RemoveContainer" containerID="1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"
Feb 23 13:07:55.827350 master-0 kubenswrapper[17411]: E0223 13:07:55.827319 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": container with ID starting with 1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc not found: ID does not exist" containerID="1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"
Feb 23 13:07:55.827410 master-0 kubenswrapper[17411]: I0223 13:07:55.827354 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"} err="failed to get container status \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": rpc error: code = NotFound desc = could not find container \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": container with ID starting with 1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc not found: ID does not exist"
Feb 23 13:07:55.827410 master-0 kubenswrapper[17411]: I0223 13:07:55.827385 17411 scope.go:117] "RemoveContainer" containerID="1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"
Feb 23 13:07:55.827971 master-0 kubenswrapper[17411]: E0223 13:07:55.827928 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": container with ID starting with 1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc not found: ID does not exist" containerID="1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"
Feb 23 13:07:55.828036 master-0 kubenswrapper[17411]: I0223 13:07:55.827983 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"} err="failed to get container status \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": rpc error: code = NotFound desc = could not find container \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": container with ID starting with 1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc not found: ID does not exist"
Feb 23 13:07:55.828036 master-0 kubenswrapper[17411]: I0223 13:07:55.828015 17411 scope.go:117] "RemoveContainer" containerID="dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"
Feb 23 13:07:55.828502 master-0 kubenswrapper[17411]: E0223 13:07:55.828461 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": container with ID starting with dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b not found: ID does not exist" containerID="dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"
Feb 23 13:07:55.828553 master-0 kubenswrapper[17411]: I0223 13:07:55.828501 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"} err="failed to get container status \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": rpc error: code = NotFound desc = could not find container \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": container with ID starting with dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b not found: ID does not exist"
Feb 23 13:07:55.828553 master-0 kubenswrapper[17411]: I0223 13:07:55.828527 17411 scope.go:117] "RemoveContainer" containerID="6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"
Feb 23 13:07:55.828917 master-0 kubenswrapper[17411]: E0223 13:07:55.828857 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": container with ID starting with 6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994 not found: ID does not exist" containerID="6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"
Feb 23 13:07:55.828917 master-0 kubenswrapper[17411]: I0223 13:07:55.828898 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"} err="failed to get container status \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": rpc error: code = NotFound desc = could not find container \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": container with ID starting with 6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994 not found: ID does not exist"
Feb 23 13:07:55.828917 master-0 kubenswrapper[17411]: I0223 13:07:55.828917 17411 scope.go:117] "RemoveContainer" containerID="1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"
Feb 23 13:07:55.829487 master-0 kubenswrapper[17411]: I0223 13:07:55.829446 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"} err="failed to get container status \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": rpc error: code = NotFound desc = could not find container \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": container with ID starting with 1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc not found: ID does not exist"
Feb 23 13:07:55.829487 master-0 kubenswrapper[17411]: I0223 13:07:55.829474 17411 scope.go:117] "RemoveContainer" containerID="1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"
Feb 23 13:07:55.829786 master-0 kubenswrapper[17411]: I0223 13:07:55.829758 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"} err="failed to get container status \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": rpc error: code = NotFound desc = could not find container \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": container with ID starting with 1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc not found: ID does not exist"
Feb 23 13:07:55.829842 master-0 kubenswrapper[17411]: I0223 13:07:55.829785 17411 scope.go:117] "RemoveContainer" containerID="dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"
Feb 23 13:07:55.830211 master-0 kubenswrapper[17411]: I0223 13:07:55.830179 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"} err="failed to get container status \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": rpc error: code = NotFound desc = could not find container \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": container with ID starting with dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b not found: ID does not exist"
Feb 23 13:07:55.830211 master-0 kubenswrapper[17411]: I0223 13:07:55.830203 17411 scope.go:117] "RemoveContainer" containerID="6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"
Feb 23 13:07:55.830523 master-0 kubenswrapper[17411]: I0223 13:07:55.830490 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"} err="failed to get container status \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": rpc error: code = NotFound desc = could not find container \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": container with ID starting with 6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994 not found: ID does not exist"
Feb 23 13:07:55.830523 master-0 kubenswrapper[17411]: I0223 13:07:55.830515 17411 scope.go:117] "RemoveContainer" containerID="1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"
Feb 23 13:07:55.830808 master-0 kubenswrapper[17411]: I0223 13:07:55.830780 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"} err="failed to get container status \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": rpc error: code = NotFound desc = could not find container \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": container with ID starting with 1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc not found: ID does not exist"
Feb 23 13:07:55.830808 master-0 kubenswrapper[17411]: I0223 13:07:55.830801 17411 scope.go:117] "RemoveContainer" containerID="1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"
Feb 23 13:07:55.831088 master-0 kubenswrapper[17411]: I0223 13:07:55.831056 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"} err="failed to get container status \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": rpc error: code = NotFound desc = could not find container \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": container with ID starting with 1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc not found: ID does not exist"
Feb 23 13:07:55.831088 master-0 kubenswrapper[17411]: I0223 13:07:55.831081 17411 scope.go:117] "RemoveContainer" containerID="dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"
Feb 23 13:07:55.831419 master-0 kubenswrapper[17411]: I0223 13:07:55.831386 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"} err="failed to get container status \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": rpc error: code = NotFound desc = could not find container \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": container with ID starting with dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b not found: ID does not exist"
Feb 23 13:07:55.831419 master-0 kubenswrapper[17411]: I0223 13:07:55.831409 17411 scope.go:117] "RemoveContainer" containerID="6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"
Feb 23 13:07:55.831652 master-0 kubenswrapper[17411]: I0223 13:07:55.831620 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"} err="failed to get container status \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": rpc error: code = NotFound desc = could not find container \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": container with ID starting with 6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994 not found: ID does not exist"
Feb 23 13:07:55.831652 master-0 kubenswrapper[17411]: I0223 13:07:55.831644 17411 scope.go:117] "RemoveContainer" containerID="1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"
Feb 23 13:07:55.831928 master-0 kubenswrapper[17411]: I0223 13:07:55.831890 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc"} err="failed to get container status \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": rpc error: code = NotFound desc = could not find container \"1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc\": container with ID starting with 1427ab26e89c91c88f2acb6982fa098ab635a45045a434ddf50a6ee7cc86a3bc not found: ID does not exist"
Feb 23 13:07:55.831928 master-0 kubenswrapper[17411]: I0223 13:07:55.831914 17411 scope.go:117] "RemoveContainer" containerID="1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"
Feb 23 13:07:55.833421 master-0 kubenswrapper[17411]: I0223 13:07:55.833384 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc"} err="failed to get container status \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": rpc error: code = NotFound desc = could not find container \"1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc\": container with ID starting with 1fc531d4aaee1c2e1c56ae2227054447cf616cc07ecca10ad4071f903d8489dc not found: ID does not exist"
Feb 23 13:07:55.833421 master-0 kubenswrapper[17411]: I0223 13:07:55.833418 17411 scope.go:117] "RemoveContainer" containerID="dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"
Feb 23 13:07:55.833871 master-0 kubenswrapper[17411]: I0223 13:07:55.833786 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b"} err="failed to get container status \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": rpc error: code = NotFound desc = could not find container \"dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b\": container with ID starting with dc7f5cc1180be271a7b73c9d3f857f557d4d77ebc84ddcf962e090e5db28b98b not found: ID does not exist"
Feb 23 13:07:55.833871 master-0 kubenswrapper[17411]: I0223 13:07:55.833834 17411 scope.go:117] "RemoveContainer" containerID="6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"
Feb 23 13:07:55.834487 master-0 kubenswrapper[17411]: I0223 13:07:55.834228 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994"} err="failed to get container status \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": rpc error: code = NotFound desc = could not find container \"6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994\": container with ID starting with 6636f37262f47e7fee6fe9c6485df3ad751e4cd02fecfee0d57b59b25fa7f994 not found: ID does not exist"
Feb 23 13:07:56.884759 master-0 kubenswrapper[17411]: I0223 13:07:56.884691 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c8e14cb165534672d5ddc06061f8f2" path="/var/lib/kubelet/pods/05c8e14cb165534672d5ddc06061f8f2/volumes"
Feb 23 13:07:57.227033 master-0 kubenswrapper[17411]: I0223 13:07:57.226977 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:57.268775 master-0 kubenswrapper[17411]: I0223 13:07:57.268676 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kube-api-access\") pod \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") "
Feb 23 13:07:57.269146 master-0 kubenswrapper[17411]: I0223 13:07:57.268868 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-var-lock\") pod \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") "
Feb 23 13:07:57.269146 master-0 kubenswrapper[17411]: I0223 13:07:57.269021 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kubelet-dir\") pod \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\" (UID: \"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b\") "
Feb 23 13:07:57.272316 master-0 kubenswrapper[17411]: I0223 13:07:57.269831 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-var-lock" (OuterVolumeSpecName: "var-lock") pod "93c37e01-20fe-43f0-b014-2aaf7a3c2b8b" (UID: "93c37e01-20fe-43f0-b014-2aaf7a3c2b8b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:07:57.272316 master-0 kubenswrapper[17411]: I0223 13:07:57.269886 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "93c37e01-20fe-43f0-b014-2aaf7a3c2b8b" (UID: "93c37e01-20fe-43f0-b014-2aaf7a3c2b8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:07:57.284677 master-0 kubenswrapper[17411]: I0223 13:07:57.284584 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "93c37e01-20fe-43f0-b014-2aaf7a3c2b8b" (UID: "93c37e01-20fe-43f0-b014-2aaf7a3c2b8b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:07:57.371255 master-0 kubenswrapper[17411]: I0223 13:07:57.371185 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:07:57.371255 master-0 kubenswrapper[17411]: I0223 13:07:57.371220 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:07:57.371255 master-0 kubenswrapper[17411]: I0223 13:07:57.371262 17411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93c37e01-20fe-43f0-b014-2aaf7a3c2b8b-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:07:57.772796 master-0 kubenswrapper[17411]: I0223 13:07:57.772736 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"93c37e01-20fe-43f0-b014-2aaf7a3c2b8b","Type":"ContainerDied","Data":"cf38011e894745f530b3ac62370eaf56db6498406855056a772c5e72657ae7ea"}
Feb 23 13:07:57.772796 master-0 kubenswrapper[17411]: I0223 13:07:57.772796 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf38011e894745f530b3ac62370eaf56db6498406855056a772c5e72657ae7ea"
Feb 23 13:07:57.773060 master-0 kubenswrapper[17411]: I0223 13:07:57.772798 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 23 13:07:59.307007 master-0 kubenswrapper[17411]: I0223 13:07:59.304630 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:07:59.307007 master-0 kubenswrapper[17411]: E0223 13:07:59.304950 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:31.304909004 +0000 UTC m=+104.732415601 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt
Feb 23 13:08:04.676108 master-0 kubenswrapper[17411]: I0223 13:08:04.676008 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:08:04.676899 master-0 kubenswrapper[17411]: E0223 13:08:04.676421 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:36.676395963 +0000 UTC m=+110.103902590 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt
Feb 23 13:08:05.868351 master-0 kubenswrapper[17411]: I0223 13:08:05.868226 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:08:05.902660 master-0 kubenswrapper[17411]: I0223 13:08:05.902553 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8d0c81df-663a-4ef4-8016-58aef2c7d5cd"
Feb 23 13:08:05.902660 master-0 kubenswrapper[17411]: I0223 13:08:05.902602 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8d0c81df-663a-4ef4-8016-58aef2c7d5cd"
Feb 23 13:08:05.918365 master-0 kubenswrapper[17411]: I0223 13:08:05.918307 17411 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:08:05.921722 master-0 kubenswrapper[17411]: I0223 13:08:05.921652 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:08:05.923669 master-0 kubenswrapper[17411]: I0223 13:08:05.923614 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:08:05.936161 master-0 kubenswrapper[17411]: I0223 13:08:05.936100 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:08:05.944141 master-0 kubenswrapper[17411]: I0223 13:08:05.942934 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 23 13:08:06.393942 master-0 kubenswrapper[17411]: I0223 13:08:06.393840 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"a6bd5c98100900ff484d9ecc07c3575ef2dfde242a0ba0ee9c6ef45ff1a27bdb"}
Feb 23 13:08:06.393942 master-0 kubenswrapper[17411]: I0223 13:08:06.393886 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"93ac8d380846375eb3a978d2f0a3e4d03963a17496bbb3d9d032fb2bdb89ef50"}
Feb 23 13:08:07.405837 master-0 kubenswrapper[17411]: I0223 13:08:07.405771 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"7be9444f5b625e402453341f193b326bd7008df65bbec6d9b42b674fec217d14"}
Feb 23 13:08:07.405837 master-0 kubenswrapper[17411]: I0223 13:08:07.405835 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"ea9aa7893884286b0f9dd2cc94d3dc00f41c3846f07eae1cc605631dd0fe37bc"}
Feb 23 13:08:07.406554 master-0 kubenswrapper[17411]: I0223 13:08:07.405856 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"b398a9f3c00c8a1ed9831c18d667495d4a0f74359778ab7ea6c74a83ae93e1ea"}
Feb 23 13:08:08.262268 master-0 kubenswrapper[17411]: I0223 13:08:08.258849 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=3.258821479 podStartE2EDuration="3.258821479s" podCreationTimestamp="2026-02-23 13:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:08:08.256740711 +0000 UTC m=+81.684247318" watchObservedRunningTime="2026-02-23 13:08:08.258821479 +0000 UTC m=+81.686328076"
Feb 23 13:08:08.283269 master-0 kubenswrapper[17411]: I0223 13:08:08.275203 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:08:08.283269 master-0 kubenswrapper[17411]: E0223 13:08:08.275578 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:12.275549671 +0000 UTC m=+145.703056268 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt
Feb 23 13:08:12.176128 master-0 kubenswrapper[17411]: I0223 13:08:12.174383 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:08:12.186306 master-0 kubenswrapper[17411]: I0223 13:08:12.181640 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-755ccb876-g7rtk"
Feb 23 13:08:15.350708 master-0 kubenswrapper[17411]: I0223 13:08:15.350632 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:08:15.351183 master-0 kubenswrapper[17411]: E0223 13:08:15.351101 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c37e01-20fe-43f0-b014-2aaf7a3c2b8b" containerName="installer"
Feb 23 13:08:15.351183 master-0 kubenswrapper[17411]: I0223 13:08:15.351125 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c37e01-20fe-43f0-b014-2aaf7a3c2b8b" containerName="installer"
Feb 23 13:08:15.351484 master-0 kubenswrapper[17411]: I0223 13:08:15.351456 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c37e01-20fe-43f0-b014-2aaf7a3c2b8b" containerName="installer"
Feb 23 13:08:15.352154 master-0 kubenswrapper[17411]: I0223 13:08:15.352126 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:08:15.352442 master-0 kubenswrapper[17411]: I0223 13:08:15.352416 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:08:15.352677 master-0 kubenswrapper[17411]: I0223 13:08:15.352589 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver" containerID="cri-o://8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e" gracePeriod=15
Feb 23 13:08:15.352746 master-0 kubenswrapper[17411]: I0223 13:08:15.352616 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f" gracePeriod=15
Feb 23 13:08:15.352836 master-0 kubenswrapper[17411]: I0223 13:08:15.352708 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3" gracePeriod=15
Feb 23 13:08:15.352914 master-0 kubenswrapper[17411]: I0223 13:08:15.352830 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5" gracePeriod=15
Feb 23 13:08:15.353063 master-0 kubenswrapper[17411]: I0223 13:08:15.352729 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-cert-syncer" containerID="cri-o://59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4" gracePeriod=15
Feb 23 13:08:15.353746 master-0 kubenswrapper[17411]: I0223 13:08:15.353691 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:08:15.354104 master-0 kubenswrapper[17411]: E0223 13:08:15.354076 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-cert-syncer"
Feb 23 13:08:15.354174 master-0 kubenswrapper[17411]: I0223 13:08:15.354104 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-cert-syncer"
Feb 23 13:08:15.354174 master-0 kubenswrapper[17411]: E0223 13:08:15.354141 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:08:15.354174 master-0 kubenswrapper[17411]: I0223 13:08:15.354155 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:08:15.354329 master-0 kubenswrapper[17411]: E0223 13:08:15.354181 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="setup"
Feb 23 13:08:15.354329 master-0 kubenswrapper[17411]: I0223 13:08:15.354194 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="setup"
Feb 23 13:08:15.354329 master-0 kubenswrapper[17411]: E0223 13:08:15.354221 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:08:15.354329 master-0 kubenswrapper[17411]: I0223 13:08:15.354236 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:08:15.354329 master-0 kubenswrapper[17411]: E0223 13:08:15.354274 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:08:15.354329 master-0 kubenswrapper[17411]: I0223 13:08:15.354287 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:08:15.354329 master-0 kubenswrapper[17411]: E0223 13:08:15.354303 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver"
Feb 23 13:08:15.354329 master-0 kubenswrapper[17411]: I0223 13:08:15.354315 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver"
Feb 23 13:08:15.354628 master-0 kubenswrapper[17411]: I0223 13:08:15.354542 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:08:15.354628 master-0 kubenswrapper[17411]: I0223 13:08:15.354574 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver"
Feb 23 13:08:15.354628 master-0 kubenswrapper[17411]: I0223 13:08:15.354595 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:08:15.354753 master-0 kubenswrapper[17411]: I0223 13:08:15.354632 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-cert-syncer"
Feb 23 13:08:15.354753 master-0 kubenswrapper[17411]: I0223 13:08:15.354652 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="kube-apiserver-check-endpoints"
Feb 23 13:08:15.354753 master-0 kubenswrapper[17411]: I0223 13:08:15.354668 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed33f74deb6fdef2cfa169d8db13e51c" containerName="setup"
Feb 23 13:08:15.403755 master-0 kubenswrapper[17411]: E0223 13:08:15.403677 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:08:15.408652 master-0 kubenswrapper[17411]: I0223 13:08:15.408567 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:08:15.408816 master-0 kubenswrapper[17411]: I0223 13:08:15.408762 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:08:15.408995 master-0 kubenswrapper[17411]: I0223 13:08:15.408958 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:08:15.409045 master-0 kubenswrapper[17411]: I0223
13:08:15.409016 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.409338 master-0 kubenswrapper[17411]: I0223 13:08:15.409281 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.510948 master-0 kubenswrapper[17411]: I0223 13:08:15.510898 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.511305 master-0 kubenswrapper[17411]: I0223 13:08:15.511285 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.511434 master-0 kubenswrapper[17411]: I0223 13:08:15.511418 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.511560 master-0 kubenswrapper[17411]: I0223 13:08:15.511080 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.511620 master-0 kubenswrapper[17411]: I0223 13:08:15.511524 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.511750 master-0 kubenswrapper[17411]: I0223 13:08:15.511663 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.511832 master-0 kubenswrapper[17411]: I0223 13:08:15.511734 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.511931 master-0 kubenswrapper[17411]: I0223 13:08:15.511848 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.512039 master-0 kubenswrapper[17411]: I0223 13:08:15.512025 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.512159 master-0 kubenswrapper[17411]: I0223 13:08:15.512100 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.512215 master-0 kubenswrapper[17411]: I0223 13:08:15.512120 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.512314 master-0 kubenswrapper[17411]: I0223 13:08:15.512296 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.512468 master-0 kubenswrapper[17411]: I0223 13:08:15.512426 17411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.614692 master-0 kubenswrapper[17411]: I0223 13:08:15.614495 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.614692 master-0 kubenswrapper[17411]: I0223 13:08:15.614632 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.615053 master-0 kubenswrapper[17411]: I0223 13:08:15.614704 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.615053 master-0 kubenswrapper[17411]: I0223 13:08:15.614749 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.615053 master-0 kubenswrapper[17411]: I0223 13:08:15.614634 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.615053 master-0 kubenswrapper[17411]: I0223 13:08:15.614838 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:15.705110 master-0 kubenswrapper[17411]: I0223 13:08:15.705042 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:15.749929 master-0 kubenswrapper[17411]: W0223 13:08:15.749850 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95806c9442ee27c355bfbf25ba6f70f0.slice/crio-bf00f05bc1ca721a08663f0617a6f45257cf1435bb1bda1ea0a9d87620c1532f WatchSource:0}: Error finding container bf00f05bc1ca721a08663f0617a6f45257cf1435bb1bda1ea0a9d87620c1532f: Status 404 returned error can't find the container with id bf00f05bc1ca721a08663f0617a6f45257cf1435bb1bda1ea0a9d87620c1532f Feb 23 13:08:15.757094 master-0 kubenswrapper[17411]: E0223 13:08:15.756837 17411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e217ea0531c9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:95806c9442ee27c355bfbf25ba6f70f0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:08:15.755203017 +0000 UTC m=+89.182709614,LastTimestamp:2026-02-23 13:08:15.755203017 +0000 UTC m=+89.182709614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:08:15.937641 master-0 kubenswrapper[17411]: I0223 13:08:15.937458 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:08:15.937915 master-0 kubenswrapper[17411]: I0223 13:08:15.937723 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:08:15.937915 master-0 kubenswrapper[17411]: I0223 13:08:15.937772 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:08:15.937915 master-0 kubenswrapper[17411]: I0223 13:08:15.937800 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:08:15.944425 master-0 kubenswrapper[17411]: I0223 13:08:15.944390 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:08:15.944740 master-0 kubenswrapper[17411]: I0223 13:08:15.944706 17411 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:08:15.945620 master-0 kubenswrapper[17411]: I0223 13:08:15.945560 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:15.946633 master-0 kubenswrapper[17411]: I0223 13:08:15.946583 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:15.947502 master-0 kubenswrapper[17411]: I0223 13:08:15.947439 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:15.948450 master-0 kubenswrapper[17411]: I0223 13:08:15.948392 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.473359 master-0 kubenswrapper[17411]: I0223 13:08:16.473306 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"95806c9442ee27c355bfbf25ba6f70f0","Type":"ContainerStarted","Data":"2815ad42dd26968dc87d1128c455ddbb0dab29bbbd4c503e2698056875d2d29a"} Feb 23 13:08:16.474148 master-0 kubenswrapper[17411]: I0223 13:08:16.474007 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"95806c9442ee27c355bfbf25ba6f70f0","Type":"ContainerStarted","Data":"bf00f05bc1ca721a08663f0617a6f45257cf1435bb1bda1ea0a9d87620c1532f"} Feb 23 13:08:16.475352 master-0 kubenswrapper[17411]: I0223 13:08:16.475302 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.475717 master-0 kubenswrapper[17411]: E0223 13:08:16.475638 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:08:16.476233 master-0 kubenswrapper[17411]: I0223 13:08:16.476159 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.477189 master-0 kubenswrapper[17411]: I0223 13:08:16.477158 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ed33f74deb6fdef2cfa169d8db13e51c/kube-apiserver-cert-syncer/0.log" Feb 23 13:08:16.478045 master-0 kubenswrapper[17411]: I0223 13:08:16.477997 17411 generic.go:334] "Generic (PLEG): container finished" podID="ed33f74deb6fdef2cfa169d8db13e51c" containerID="75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f" exitCode=0 Feb 23 13:08:16.478045 master-0 kubenswrapper[17411]: I0223 13:08:16.478031 17411 generic.go:334] "Generic (PLEG): container finished" podID="ed33f74deb6fdef2cfa169d8db13e51c" containerID="677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5" exitCode=0 Feb 23 13:08:16.478138 master-0 kubenswrapper[17411]: I0223 13:08:16.478048 17411 generic.go:334] "Generic (PLEG): container finished" podID="ed33f74deb6fdef2cfa169d8db13e51c" containerID="b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3" exitCode=0 Feb 23 13:08:16.478138 master-0 kubenswrapper[17411]: I0223 13:08:16.478061 17411 generic.go:334] "Generic (PLEG): container finished" podID="ed33f74deb6fdef2cfa169d8db13e51c" containerID="59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4" exitCode=2 Feb 23 13:08:16.480321 master-0 kubenswrapper[17411]: I0223 13:08:16.480282 17411 generic.go:334] "Generic (PLEG): container finished" podID="649c8f56-22ef-4e68-bc9b-9d608fba998c" containerID="0ad530397d7e0906f92bdc82f78dbc6b9a8f87e05a0492ec16d7cc020ef72a12" exitCode=0 Feb 23 13:08:16.480397 master-0 kubenswrapper[17411]: I0223 13:08:16.480331 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"649c8f56-22ef-4e68-bc9b-9d608fba998c","Type":"ContainerDied","Data":"0ad530397d7e0906f92bdc82f78dbc6b9a8f87e05a0492ec16d7cc020ef72a12"} Feb 23 13:08:16.482875 master-0 kubenswrapper[17411]: I0223 13:08:16.482291 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.483030 master-0 kubenswrapper[17411]: I0223 13:08:16.482902 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.483485 master-0 kubenswrapper[17411]: I0223 13:08:16.483439 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.485885 master-0 kubenswrapper[17411]: I0223 13:08:16.485837 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:08:16.486723 master-0 kubenswrapper[17411]: I0223 13:08:16.486657 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.487101 master-0 kubenswrapper[17411]: I0223 13:08:16.487007 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:08:16.487474 master-0 
kubenswrapper[17411]: I0223 13:08:16.487435 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.487939 master-0 kubenswrapper[17411]: I0223 13:08:16.487901 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.489129 master-0 kubenswrapper[17411]: I0223 13:08:16.488378 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.489219 master-0 kubenswrapper[17411]: I0223 13:08:16.489192 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.490337 master-0 kubenswrapper[17411]: I0223 13:08:16.490173 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.873152 master-0 kubenswrapper[17411]: I0223 13:08:16.873025 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.874130 master-0 kubenswrapper[17411]: I0223 13:08:16.874042 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:16.875134 master-0 kubenswrapper[17411]: I0223 13:08:16.875065 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:17.709377 master-0 kubenswrapper[17411]: I0223 13:08:17.709334 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ed33f74deb6fdef2cfa169d8db13e51c/kube-apiserver-cert-syncer/0.log" Feb 23 13:08:17.710349 master-0 kubenswrapper[17411]: I0223 13:08:17.710283 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:08:17.711790 master-0 kubenswrapper[17411]: I0223 13:08:17.711683 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:17.712717 master-0 kubenswrapper[17411]: I0223 13:08:17.712675 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:17.713596 master-0 kubenswrapper[17411]: I0223 13:08:17.713554 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:08:17.864282 master-0 kubenswrapper[17411]: I0223 13:08:17.864002 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir\") pod \"ed33f74deb6fdef2cfa169d8db13e51c\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " Feb 23 13:08:17.864282 master-0 kubenswrapper[17411]: I0223 13:08:17.864081 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir\") pod 
\"ed33f74deb6fdef2cfa169d8db13e51c\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " Feb 23 13:08:17.864282 master-0 kubenswrapper[17411]: I0223 13:08:17.864115 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ed33f74deb6fdef2cfa169d8db13e51c" (UID: "ed33f74deb6fdef2cfa169d8db13e51c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:08:17.864282 master-0 kubenswrapper[17411]: I0223 13:08:17.864178 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir\") pod \"ed33f74deb6fdef2cfa169d8db13e51c\" (UID: \"ed33f74deb6fdef2cfa169d8db13e51c\") " Feb 23 13:08:17.864282 master-0 kubenswrapper[17411]: I0223 13:08:17.864226 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ed33f74deb6fdef2cfa169d8db13e51c" (UID: "ed33f74deb6fdef2cfa169d8db13e51c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:08:17.864282 master-0 kubenswrapper[17411]: I0223 13:08:17.864269 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ed33f74deb6fdef2cfa169d8db13e51c" (UID: "ed33f74deb6fdef2cfa169d8db13e51c"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:08:17.865055 master-0 kubenswrapper[17411]: I0223 13:08:17.864759 17411 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:08:17.865055 master-0 kubenswrapper[17411]: I0223 13:08:17.864779 17411 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:08:17.865055 master-0 kubenswrapper[17411]: I0223 13:08:17.864792 17411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ed33f74deb6fdef2cfa169d8db13e51c-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:08:17.999029 master-0 kubenswrapper[17411]: I0223 13:08:17.998931 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 23 13:08:18.000492 master-0 kubenswrapper[17411]: I0223 13:08:18.000432 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.001471 master-0 kubenswrapper[17411]: I0223 13:08:18.001390 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.002358 master-0 kubenswrapper[17411]: I0223 13:08:18.002282 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.069481 master-0 kubenswrapper[17411]: I0223 13:08:18.069380 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-kubelet-dir\") pod \"649c8f56-22ef-4e68-bc9b-9d608fba998c\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") "
Feb 23 13:08:18.069779 master-0 kubenswrapper[17411]: I0223 13:08:18.069509 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "649c8f56-22ef-4e68-bc9b-9d608fba998c" (UID: "649c8f56-22ef-4e68-bc9b-9d608fba998c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:08:18.069779 master-0 kubenswrapper[17411]: I0223 13:08:18.069571 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/649c8f56-22ef-4e68-bc9b-9d608fba998c-kube-api-access\") pod \"649c8f56-22ef-4e68-bc9b-9d608fba998c\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") "
Feb 23 13:08:18.069779 master-0 kubenswrapper[17411]: I0223 13:08:18.069715 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-var-lock\") pod \"649c8f56-22ef-4e68-bc9b-9d608fba998c\" (UID: \"649c8f56-22ef-4e68-bc9b-9d608fba998c\") "
Feb 23 13:08:18.070043 master-0 kubenswrapper[17411]: I0223 13:08:18.069904 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-var-lock" (OuterVolumeSpecName: "var-lock") pod "649c8f56-22ef-4e68-bc9b-9d608fba998c" (UID: "649c8f56-22ef-4e68-bc9b-9d608fba998c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:08:18.070227 master-0 kubenswrapper[17411]: I0223 13:08:18.070172 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:08:18.070227 master-0 kubenswrapper[17411]: I0223 13:08:18.070212 17411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/649c8f56-22ef-4e68-bc9b-9d608fba998c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:08:18.072960 master-0 kubenswrapper[17411]: I0223 13:08:18.072871 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/649c8f56-22ef-4e68-bc9b-9d608fba998c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "649c8f56-22ef-4e68-bc9b-9d608fba998c" (UID: "649c8f56-22ef-4e68-bc9b-9d608fba998c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:08:18.172504 master-0 kubenswrapper[17411]: I0223 13:08:18.171855 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/649c8f56-22ef-4e68-bc9b-9d608fba998c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 23 13:08:18.514450 master-0 kubenswrapper[17411]: I0223 13:08:18.514389 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ed33f74deb6fdef2cfa169d8db13e51c/kube-apiserver-cert-syncer/0.log"
Feb 23 13:08:18.515615 master-0 kubenswrapper[17411]: I0223 13:08:18.515367 17411 generic.go:334] "Generic (PLEG): container finished" podID="ed33f74deb6fdef2cfa169d8db13e51c" containerID="8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e" exitCode=0
Feb 23 13:08:18.515615 master-0 kubenswrapper[17411]: I0223 13:08:18.515497 17411 scope.go:117] "RemoveContainer" containerID="75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f"
Feb 23 13:08:18.515615 master-0 kubenswrapper[17411]: I0223 13:08:18.515526 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:18.519545 master-0 kubenswrapper[17411]: I0223 13:08:18.519508 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 23 13:08:18.519545 master-0 kubenswrapper[17411]: I0223 13:08:18.519509 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"649c8f56-22ef-4e68-bc9b-9d608fba998c","Type":"ContainerDied","Data":"f1521dc299e825db85f41a3f5ce09ee770285ed9eca4a5f73654268f61fd88f9"}
Feb 23 13:08:18.519656 master-0 kubenswrapper[17411]: I0223 13:08:18.519568 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1521dc299e825db85f41a3f5ce09ee770285ed9eca4a5f73654268f61fd88f9"
Feb 23 13:08:18.536642 master-0 kubenswrapper[17411]: I0223 13:08:18.536586 17411 scope.go:117] "RemoveContainer" containerID="677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5"
Feb 23 13:08:18.555345 master-0 kubenswrapper[17411]: I0223 13:08:18.555048 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.556539 master-0 kubenswrapper[17411]: I0223 13:08:18.556476 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.557895 master-0 kubenswrapper[17411]: I0223 13:08:18.557805 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.558903 master-0 kubenswrapper[17411]: I0223 13:08:18.558813 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.559733 master-0 kubenswrapper[17411]: I0223 13:08:18.559653 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.560492 master-0 kubenswrapper[17411]: I0223 13:08:18.560445 17411 status_manager.go:851] "Failed to get status for pod" podUID="ed33f74deb6fdef2cfa169d8db13e51c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:18.568369 master-0 kubenswrapper[17411]: I0223 13:08:18.568339 17411 scope.go:117] "RemoveContainer" containerID="b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3"
Feb 23 13:08:18.591476 master-0 kubenswrapper[17411]: I0223 13:08:18.591313 17411 scope.go:117] "RemoveContainer" containerID="59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4"
Feb 23 13:08:18.609415 master-0 kubenswrapper[17411]: I0223 13:08:18.609365 17411 scope.go:117] "RemoveContainer" containerID="8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e"
Feb 23 13:08:18.633928 master-0 kubenswrapper[17411]: I0223 13:08:18.633869 17411 scope.go:117] "RemoveContainer" containerID="9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78"
Feb 23 13:08:18.660655 master-0 kubenswrapper[17411]: I0223 13:08:18.660392 17411 scope.go:117] "RemoveContainer" containerID="75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f"
Feb 23 13:08:18.660919 master-0 kubenswrapper[17411]: E0223 13:08:18.660878 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f\": container with ID starting with 75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f not found: ID does not exist" containerID="75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f"
Feb 23 13:08:18.661013 master-0 kubenswrapper[17411]: I0223 13:08:18.660926 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f"} err="failed to get container status \"75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f\": rpc error: code = NotFound desc = could not find container \"75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f\": container with ID starting with 75f9a8ea0e4aa9d7b652a98abcefa31dd08c8196a3081a3eb25f28295ed26a8f not found: ID does not exist"
Feb 23 13:08:18.661013 master-0 kubenswrapper[17411]: I0223 13:08:18.660960 17411 scope.go:117] "RemoveContainer" containerID="677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5"
Feb 23 13:08:18.661694 master-0 kubenswrapper[17411]: E0223 13:08:18.661615 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5\": container with ID starting with 677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5 not found: ID does not exist" containerID="677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5"
Feb 23 13:08:18.661798 master-0 kubenswrapper[17411]: I0223 13:08:18.661689 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5"} err="failed to get container status \"677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5\": rpc error: code = NotFound desc = could not find container \"677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5\": container with ID starting with 677125b0965a3facbbca8cd39f97b17fc6ab3cac15c7ac1f545362d34acab9f5 not found: ID does not exist"
Feb 23 13:08:18.661798 master-0 kubenswrapper[17411]: I0223 13:08:18.661743 17411 scope.go:117] "RemoveContainer" containerID="b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3"
Feb 23 13:08:18.662235 master-0 kubenswrapper[17411]: E0223 13:08:18.662181 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3\": container with ID starting with b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3 not found: ID does not exist" containerID="b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3"
Feb 23 13:08:18.662235 master-0 kubenswrapper[17411]: I0223 13:08:18.662223 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3"} err="failed to get container status \"b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3\": rpc error: code = NotFound desc = could not find container \"b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3\": container with ID starting with b5fc9a318c986342d40121df4d0470e9e5511514f899bed601f2fbb97ec2d3d3 not found: ID does not exist"
Feb 23 13:08:18.662489 master-0 kubenswrapper[17411]: I0223 13:08:18.662263 17411 scope.go:117] "RemoveContainer" containerID="59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4"
Feb 23 13:08:18.662663 master-0 kubenswrapper[17411]: E0223 13:08:18.662605 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4\": container with ID starting with 59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4 not found: ID does not exist" containerID="59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4"
Feb 23 13:08:18.662663 master-0 kubenswrapper[17411]: I0223 13:08:18.662646 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4"} err="failed to get container status \"59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4\": rpc error: code = NotFound desc = could not find container \"59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4\": container with ID starting with 59292d9da56aa1c731b1c4cc397d35e0898a60d09884fa6aade99d2f993ecca4 not found: ID does not exist"
Feb 23 13:08:18.662663 master-0 kubenswrapper[17411]: I0223 13:08:18.662663 17411 scope.go:117] "RemoveContainer" containerID="8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e"
Feb 23 13:08:18.663106 master-0 kubenswrapper[17411]: E0223 13:08:18.663069 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e\": container with ID starting with 8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e not found: ID does not exist" containerID="8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e"
Feb 23 13:08:18.663216 master-0 kubenswrapper[17411]: I0223 13:08:18.663107 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e"} err="failed to get container status \"8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e\": rpc error: code = NotFound desc = could not find container \"8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e\": container with ID starting with 8f15e2c7b7c871eb15dc79138fd33d21918632860651c5a62cf0750061db911e not found: ID does not exist"
Feb 23 13:08:18.663216 master-0 kubenswrapper[17411]: I0223 13:08:18.663128 17411 scope.go:117] "RemoveContainer" containerID="9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78"
Feb 23 13:08:18.663544 master-0 kubenswrapper[17411]: E0223 13:08:18.663469 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78\": container with ID starting with 9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78 not found: ID does not exist" containerID="9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78"
Feb 23 13:08:18.663544 master-0 kubenswrapper[17411]: I0223 13:08:18.663508 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78"} err="failed to get container status \"9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78\": rpc error: code = NotFound desc = could not find container \"9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78\": container with ID starting with 9971c933361743191b06bf424b109ce96ea5ea53d45f6c8565e0ccd376fdde78 not found: ID does not exist"
Feb 23 13:08:18.879992 master-0 kubenswrapper[17411]: I0223 13:08:18.879853 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed33f74deb6fdef2cfa169d8db13e51c" path="/var/lib/kubelet/pods/ed33f74deb6fdef2cfa169d8db13e51c/volumes"
Feb 23 13:08:21.954165 master-0 kubenswrapper[17411]: E0223 13:08:21.954052 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:21.955283 master-0 kubenswrapper[17411]: E0223 13:08:21.955180 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:21.955964 master-0 kubenswrapper[17411]: E0223 13:08:21.955887 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:21.956639 master-0 kubenswrapper[17411]: E0223 13:08:21.956577 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:21.957474 master-0 kubenswrapper[17411]: E0223 13:08:21.957394 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:21.957474 master-0 kubenswrapper[17411]: I0223 13:08:21.957466 17411 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 23 13:08:21.958311 master-0 kubenswrapper[17411]: E0223 13:08:21.958216 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 23 13:08:22.159824 master-0 kubenswrapper[17411]: E0223 13:08:22.159669 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 23 13:08:22.561623 master-0 kubenswrapper[17411]: E0223 13:08:22.561531 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 23 13:08:23.361922 master-0 kubenswrapper[17411]: E0223 13:08:23.361679 17411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e217ea0531c9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:95806c9442ee27c355bfbf25ba6f70f0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:08:15.755203017 +0000 UTC m=+89.182709614,LastTimestamp:2026-02-23 13:08:15.755203017 +0000 UTC m=+89.182709614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:08:23.363072 master-0 kubenswrapper[17411]: E0223 13:08:23.362682 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 23 13:08:24.964020 master-0 kubenswrapper[17411]: E0223 13:08:24.963939 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Feb 23 13:08:25.867908 master-0 kubenswrapper[17411]: I0223 13:08:25.867835 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:25.869121 master-0 kubenswrapper[17411]: I0223 13:08:25.869035 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:25.869715 master-0 kubenswrapper[17411]: I0223 13:08:25.869665 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:25.887452 master-0 kubenswrapper[17411]: I0223 13:08:25.887424 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:25.887569 master-0 kubenswrapper[17411]: I0223 13:08:25.887558 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:25.888252 master-0 kubenswrapper[17411]: E0223 13:08:25.888198 17411 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:25.888897 master-0 kubenswrapper[17411]: I0223 13:08:25.888860 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:26.595531 master-0 kubenswrapper[17411]: I0223 13:08:26.595463 17411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="eb83d6db2b81eff670c43e4f30b6b4176f20d325f24bb246edc1393395f0fde8" exitCode=0
Feb 23 13:08:26.596369 master-0 kubenswrapper[17411]: I0223 13:08:26.595561 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerDied","Data":"eb83d6db2b81eff670c43e4f30b6b4176f20d325f24bb246edc1393395f0fde8"}
Feb 23 13:08:26.596369 master-0 kubenswrapper[17411]: I0223 13:08:26.595597 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"a040139d21d0180ef578a52fb81eda7846d84abcd55d92db9b7eba58b8d68615"}
Feb 23 13:08:26.596369 master-0 kubenswrapper[17411]: I0223 13:08:26.595930 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:26.596369 master-0 kubenswrapper[17411]: I0223 13:08:26.595950 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:26.597136 master-0 kubenswrapper[17411]: E0223 13:08:26.597041 17411 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:26.597136 master-0 kubenswrapper[17411]: I0223 13:08:26.597072 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:26.597998 master-0 kubenswrapper[17411]: I0223 13:08:26.597921 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:26.598832 master-0 kubenswrapper[17411]: I0223 13:08:26.598697 17411 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="fd8a73b94af97a6ee5fd332de6ff901ee87339c2669fee29463cd1d6a2935792" exitCode=1
Feb 23 13:08:26.598832 master-0 kubenswrapper[17411]: I0223 13:08:26.598734 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"fd8a73b94af97a6ee5fd332de6ff901ee87339c2669fee29463cd1d6a2935792"}
Feb 23 13:08:26.598832 master-0 kubenswrapper[17411]: I0223 13:08:26.598772 17411 scope.go:117] "RemoveContainer" containerID="177a00edcfd919e7d221798cd7875143318357f73a98d1f96f1e3d8cf020354d"
Feb 23 13:08:26.600398 master-0 kubenswrapper[17411]: I0223 13:08:26.600326 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:26.601104 master-0 kubenswrapper[17411]: I0223 13:08:26.601058 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:26.601659 master-0 kubenswrapper[17411]: I0223 13:08:26.601618 17411 status_manager.go:851] "Failed to get status for pod" podUID="56c3cb71c9851003c8de7e7c5db4b87e" pod="kube-system/bootstrap-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:26.612195 master-0 kubenswrapper[17411]: I0223 13:08:26.611544 17411 scope.go:117] "RemoveContainer" containerID="fd8a73b94af97a6ee5fd332de6ff901ee87339c2669fee29463cd1d6a2935792"
Feb 23 13:08:26.877648 master-0 kubenswrapper[17411]: I0223 13:08:26.877566 17411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:26.879013 master-0 kubenswrapper[17411]: I0223 13:08:26.878953 17411 status_manager.go:851] "Failed to get status for pod" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:26.880531 master-0 kubenswrapper[17411]: I0223 13:08:26.879828 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:26.881238 master-0 kubenswrapper[17411]: I0223 13:08:26.881172 17411 status_manager.go:851] "Failed to get status for pod" podUID="56c3cb71c9851003c8de7e7c5db4b87e" pod="kube-system/bootstrap-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:08:27.609465 master-0 kubenswrapper[17411]: I0223 13:08:27.609407 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"dc5ce8696fe6f5fe40f802dd027c3d1021d387667d3f9353461a3632d607781a"}
Feb 23 13:08:27.610104 master-0 kubenswrapper[17411]: I0223 13:08:27.609493 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"1451bfe95dea492070e81afea279bb401c056a53aa2057f0e288509531e88c91"}
Feb 23 13:08:27.612088 master-0 kubenswrapper[17411]: I0223 13:08:27.612037 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7"}
Feb 23 13:08:28.632295 master-0 kubenswrapper[17411]: I0223 13:08:28.632192 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"061d7a30e7243aaf925347846dddb4f9e340978170f0d9805e39811eeb5a64eb"}
Feb 23 13:08:28.632960 master-0 kubenswrapper[17411]: I0223 13:08:28.632304 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"af37724971496c567478e8ee1bc3c4cea631a17cbc43ca93ff3d0e2473a64b7f"}
Feb 23 13:08:28.632960 master-0 kubenswrapper[17411]: I0223 13:08:28.632328 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"219fe31af98ac0a70bf5c99e980eff392eafdb712a96f15192f2e77ddadeb718"}
Feb 23 13:08:28.632960 master-0 kubenswrapper[17411]: I0223 13:08:28.632476 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:28.632960 master-0 kubenswrapper[17411]: I0223 13:08:28.632671 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:28.632960 master-0 kubenswrapper[17411]: I0223 13:08:28.632711 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:30.889206 master-0 kubenswrapper[17411]: I0223 13:08:30.889072 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:30.889206 master-0 kubenswrapper[17411]: I0223 13:08:30.889142 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:30.894154 master-0 kubenswrapper[17411]: I0223 13:08:30.894109 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:31.399806 master-0 kubenswrapper[17411]: I0223 13:08:31.399729 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 13:08:31.400096 master-0 kubenswrapper[17411]: E0223 13:08:31.399948 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:35.399919772 +0000 UTC m=+168.827426369 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt
Feb 23 13:08:33.787461 master-0 kubenswrapper[17411]: I0223 13:08:33.787402 17411 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:33.861574 master-0 kubenswrapper[17411]: I0223 13:08:33.861499 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="888e23114cf20f3bf6573c5f7b88d7d0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:08:34.678195 master-0 kubenswrapper[17411]: I0223 13:08:34.678103 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:34.678195
master-0 kubenswrapper[17411]: I0223 13:08:34.678161 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b" Feb 23 13:08:34.682978 master-0 kubenswrapper[17411]: I0223 13:08:34.682895 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="888e23114cf20f3bf6573c5f7b88d7d0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429" Feb 23 13:08:36.686043 master-0 kubenswrapper[17411]: I0223 13:08:36.685924 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:08:36.687263 master-0 kubenswrapper[17411]: E0223 13:08:36.686897 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:40.686858076 +0000 UTC m=+174.114364713 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:08:40.222581 master-0 kubenswrapper[17411]: I0223 13:08:40.222512 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 13:08:40.291148 master-0 kubenswrapper[17411]: I0223 13:08:40.291039 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 13:08:40.787519 master-0 kubenswrapper[17411]: I0223 13:08:40.787412 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 23 13:08:40.905193 master-0 kubenswrapper[17411]: I0223 13:08:40.905122 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 13:08:41.404104 master-0 kubenswrapper[17411]: I0223 13:08:41.404037 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 13:08:41.501548 master-0 kubenswrapper[17411]: I0223 13:08:41.501440 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 13:08:41.713233 master-0 kubenswrapper[17411]: I0223 13:08:41.713155 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 13:08:41.794601 master-0 kubenswrapper[17411]: I0223 13:08:41.794523 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 13:08:41.796462 master-0 kubenswrapper[17411]: I0223 13:08:41.796423 17411 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 13:08:41.814550 master-0 kubenswrapper[17411]: I0223 13:08:41.814462 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 23 13:08:41.844571 master-0 kubenswrapper[17411]: I0223 13:08:41.844481 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 23 13:08:42.018117 master-0 kubenswrapper[17411]: I0223 13:08:42.017957 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 13:08:42.041628 master-0 kubenswrapper[17411]: I0223 13:08:42.041561 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 23 13:08:42.088322 master-0 kubenswrapper[17411]: I0223 13:08:42.088248 17411 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 13:08:42.275847 master-0 kubenswrapper[17411]: I0223 13:08:42.275701 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:08:42.565350 master-0 kubenswrapper[17411]: I0223 13:08:42.565156 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 13:08:42.615689 master-0 kubenswrapper[17411]: I0223 13:08:42.615634 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 23 13:08:42.677903 master-0 kubenswrapper[17411]: I0223 13:08:42.677855 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 13:08:42.736080 master-0 kubenswrapper[17411]: I0223 13:08:42.736024 17411 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-apiserver"/"encryption-config-1" Feb 23 13:08:42.741495 master-0 kubenswrapper[17411]: I0223 13:08:42.741446 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 13:08:43.322500 master-0 kubenswrapper[17411]: I0223 13:08:43.322422 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 13:08:43.353660 master-0 kubenswrapper[17411]: I0223 13:08:43.353597 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 23 13:08:43.545365 master-0 kubenswrapper[17411]: I0223 13:08:43.545279 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 13:08:43.624511 master-0 kubenswrapper[17411]: I0223 13:08:43.624374 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 13:08:43.667709 master-0 kubenswrapper[17411]: I0223 13:08:43.667640 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-wt8dr" Feb 23 13:08:43.851106 master-0 kubenswrapper[17411]: I0223 13:08:43.850982 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 13:08:43.939796 master-0 kubenswrapper[17411]: I0223 13:08:43.939398 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:08:44.296341 master-0 kubenswrapper[17411]: I0223 13:08:44.296136 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 13:08:44.296601 master-0 kubenswrapper[17411]: I0223 13:08:44.296532 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"trusted-ca-bundle" Feb 23 13:08:44.357900 master-0 kubenswrapper[17411]: I0223 13:08:44.357791 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 23 13:08:44.389929 master-0 kubenswrapper[17411]: I0223 13:08:44.389782 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 13:08:44.856142 master-0 kubenswrapper[17411]: I0223 13:08:44.856046 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 13:08:44.865008 master-0 kubenswrapper[17411]: I0223 13:08:44.864931 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 13:08:45.053596 master-0 kubenswrapper[17411]: I0223 13:08:45.053523 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 23 13:08:45.055798 master-0 kubenswrapper[17411]: I0223 13:08:45.055740 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 13:08:45.104686 master-0 kubenswrapper[17411]: I0223 13:08:45.104627 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:08:45.147197 master-0 kubenswrapper[17411]: I0223 13:08:45.147051 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 23 13:08:45.275769 master-0 kubenswrapper[17411]: I0223 13:08:45.275692 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 23 13:08:45.288932 master-0 kubenswrapper[17411]: I0223 13:08:45.288854 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 
23 13:08:45.437033 master-0 kubenswrapper[17411]: I0223 13:08:45.436902 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:08:45.679392 master-0 kubenswrapper[17411]: I0223 13:08:45.679067 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 13:08:45.913672 master-0 kubenswrapper[17411]: I0223 13:08:45.913590 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 13:08:45.945121 master-0 kubenswrapper[17411]: I0223 13:08:45.944992 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 13:08:45.966659 master-0 kubenswrapper[17411]: I0223 13:08:45.966576 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 23 13:08:46.095044 master-0 kubenswrapper[17411]: I0223 13:08:46.094912 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 23 13:08:46.229023 master-0 kubenswrapper[17411]: I0223 13:08:46.228969 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 13:08:46.274173 master-0 kubenswrapper[17411]: I0223 13:08:46.274103 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 13:08:46.306113 master-0 kubenswrapper[17411]: I0223 13:08:46.306025 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-8ph7r" Feb 23 13:08:46.405367 master-0 kubenswrapper[17411]: I0223 13:08:46.405292 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 13:08:46.457127 master-0 kubenswrapper[17411]: I0223 13:08:46.457016 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 23 13:08:46.501706 master-0 kubenswrapper[17411]: I0223 13:08:46.501572 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 23 13:08:47.050435 master-0 kubenswrapper[17411]: I0223 13:08:47.050372 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 13:08:47.264676 master-0 kubenswrapper[17411]: I0223 13:08:47.264565 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 13:08:47.367401 master-0 kubenswrapper[17411]: I0223 13:08:47.367211 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 13:08:47.374089 master-0 kubenswrapper[17411]: I0223 13:08:47.374041 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 13:08:47.377118 master-0 kubenswrapper[17411]: I0223 13:08:47.377063 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 23 13:08:47.520629 master-0 kubenswrapper[17411]: I0223 13:08:47.520546 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 23 13:08:47.531064 master-0 kubenswrapper[17411]: I0223 13:08:47.530976 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 23 13:08:47.617333 master-0 kubenswrapper[17411]: I0223 13:08:47.617285 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:08:47.654728 master-0 kubenswrapper[17411]: I0223 13:08:47.654533 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:08:47.743757 master-0 kubenswrapper[17411]: I0223 13:08:47.743366 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 13:08:47.747542 master-0 kubenswrapper[17411]: I0223 13:08:47.747485 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 13:08:47.833596 master-0 kubenswrapper[17411]: I0223 13:08:47.833486 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 23 13:08:48.235968 master-0 kubenswrapper[17411]: I0223 13:08:48.235883 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:08:48.237886 master-0 kubenswrapper[17411]: I0223 13:08:48.237830 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 23 13:08:48.271962 master-0 kubenswrapper[17411]: I0223 13:08:48.271879 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-n9dxs" Feb 23 13:08:48.407415 master-0 kubenswrapper[17411]: I0223 13:08:48.407345 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 13:08:48.417954 master-0 kubenswrapper[17411]: I0223 13:08:48.417900 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 23 13:08:48.421393 master-0 kubenswrapper[17411]: I0223 13:08:48.421348 17411 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 13:08:48.656602 master-0 kubenswrapper[17411]: I0223 13:08:48.656435 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 13:08:48.699959 master-0 kubenswrapper[17411]: I0223 13:08:48.699897 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 13:08:48.808347 master-0 kubenswrapper[17411]: I0223 13:08:48.808283 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 13:08:48.815599 master-0 kubenswrapper[17411]: I0223 13:08:48.815514 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:08:48.850180 master-0 kubenswrapper[17411]: I0223 13:08:48.850115 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-twm6g" Feb 23 13:08:48.867666 master-0 kubenswrapper[17411]: I0223 13:08:48.867612 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 13:08:48.983323 master-0 kubenswrapper[17411]: I0223 13:08:48.983223 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-lp4jk" Feb 23 13:08:49.115266 master-0 kubenswrapper[17411]: I0223 13:08:49.115200 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 13:08:49.137826 master-0 kubenswrapper[17411]: I0223 13:08:49.137769 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 13:08:49.168144 master-0 kubenswrapper[17411]: I0223 13:08:49.165627 17411 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-5499c" Feb 23 13:08:49.250994 master-0 kubenswrapper[17411]: I0223 13:08:49.250856 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 13:08:49.279504 master-0 kubenswrapper[17411]: I0223 13:08:49.279414 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 13:08:49.293151 master-0 kubenswrapper[17411]: I0223 13:08:49.293086 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 13:08:49.294893 master-0 kubenswrapper[17411]: I0223 13:08:49.294854 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 23 13:08:49.347302 master-0 kubenswrapper[17411]: I0223 13:08:49.344544 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 23 13:08:49.374729 master-0 kubenswrapper[17411]: I0223 13:08:49.374691 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 13:08:49.670431 master-0 kubenswrapper[17411]: I0223 13:08:49.670221 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 23 13:08:49.678960 master-0 kubenswrapper[17411]: I0223 13:08:49.678920 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 13:08:49.689092 master-0 kubenswrapper[17411]: I0223 13:08:49.689057 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 23 13:08:49.819837 master-0 kubenswrapper[17411]: I0223 13:08:49.819778 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"kube-root-ca.crt" Feb 23 13:08:50.009273 master-0 kubenswrapper[17411]: I0223 13:08:50.009177 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:08:50.040262 master-0 kubenswrapper[17411]: I0223 13:08:50.040200 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 13:08:50.116441 master-0 kubenswrapper[17411]: I0223 13:08:50.116379 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 13:08:50.150195 master-0 kubenswrapper[17411]: I0223 13:08:50.150102 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 13:08:50.218529 master-0 kubenswrapper[17411]: I0223 13:08:50.218458 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9whd7" Feb 23 13:08:50.307665 master-0 kubenswrapper[17411]: I0223 13:08:50.307465 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 13:08:50.337182 master-0 kubenswrapper[17411]: I0223 13:08:50.337134 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 13:08:50.383860 master-0 kubenswrapper[17411]: I0223 13:08:50.383691 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 23 13:08:50.467669 master-0 kubenswrapper[17411]: I0223 13:08:50.467563 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 13:08:50.482747 master-0 kubenswrapper[17411]: I0223 13:08:50.482680 17411 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 23 13:08:50.664326 master-0 kubenswrapper[17411]: I0223 13:08:50.664088 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 13:08:50.771408 master-0 kubenswrapper[17411]: I0223 13:08:50.771326 17411 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 13:08:50.811014 master-0 kubenswrapper[17411]: I0223 13:08:50.810944 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 23 13:08:50.866938 master-0 kubenswrapper[17411]: I0223 13:08:50.866854 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 23 13:08:50.947876 master-0 kubenswrapper[17411]: I0223 13:08:50.947819 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 13:08:50.982225 master-0 kubenswrapper[17411]: I0223 13:08:50.982149 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 23 13:08:51.149840 master-0 kubenswrapper[17411]: I0223 13:08:51.149757 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 23 13:08:51.210331 master-0 kubenswrapper[17411]: I0223 13:08:51.210181 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 13:08:51.213008 master-0 kubenswrapper[17411]: I0223 13:08:51.212982 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 13:08:51.240034 master-0 kubenswrapper[17411]: I0223 13:08:51.239945 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 23 13:08:51.254705 master-0 kubenswrapper[17411]: I0223 13:08:51.254629 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 13:08:51.277105 master-0 kubenswrapper[17411]: I0223 13:08:51.277059 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-zj94f" Feb 23 13:08:51.304052 master-0 kubenswrapper[17411]: I0223 13:08:51.303994 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 23 13:08:51.360812 master-0 kubenswrapper[17411]: I0223 13:08:51.360756 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 23 13:08:51.487897 master-0 kubenswrapper[17411]: I0223 13:08:51.487768 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 13:08:51.517857 master-0 kubenswrapper[17411]: I0223 13:08:51.517805 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 23 13:08:51.530896 master-0 kubenswrapper[17411]: I0223 13:08:51.530822 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 13:08:51.626282 master-0 kubenswrapper[17411]: I0223 13:08:51.625762 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 13:08:51.658306 master-0 kubenswrapper[17411]: I0223 13:08:51.653604 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 13:08:51.706052 master-0 kubenswrapper[17411]: I0223 13:08:51.706003 17411 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 13:08:51.768566 master-0 kubenswrapper[17411]: I0223 13:08:51.768382 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-977zq" Feb 23 13:08:51.983560 master-0 kubenswrapper[17411]: I0223 13:08:51.983480 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 23 13:08:52.032823 master-0 kubenswrapper[17411]: I0223 13:08:52.032673 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 13:08:52.156358 master-0 kubenswrapper[17411]: I0223 13:08:52.156275 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 23 13:08:52.184270 master-0 kubenswrapper[17411]: I0223 13:08:52.184159 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:08:52.188044 master-0 kubenswrapper[17411]: I0223 13:08:52.187923 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 23 13:08:52.196639 master-0 kubenswrapper[17411]: I0223 13:08:52.196585 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:08:52.307284 master-0 kubenswrapper[17411]: I0223 13:08:52.307062 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 13:08:52.391520 master-0 kubenswrapper[17411]: I0223 13:08:52.391457 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-qdkmb" Feb 23 13:08:52.426406 master-0 kubenswrapper[17411]: I0223 13:08:52.420457 17411 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 23 13:08:52.436754 master-0 kubenswrapper[17411]: I0223 13:08:52.436672 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 23 13:08:52.454758 master-0 kubenswrapper[17411]: I0223 13:08:52.454693 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 23 13:08:52.474118 master-0 kubenswrapper[17411]: I0223 13:08:52.474052 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 23 13:08:52.476104 master-0 kubenswrapper[17411]: I0223 13:08:52.476058 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 23 13:08:52.513409 master-0 kubenswrapper[17411]: I0223 13:08:52.513338 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 23 13:08:52.570495 master-0 kubenswrapper[17411]: I0223 13:08:52.570375 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 23 13:08:52.587950 master-0 kubenswrapper[17411]: I0223 13:08:52.587911 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 23 13:08:52.655831 master-0 kubenswrapper[17411]: I0223 13:08:52.655754 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 23 13:08:52.692720 master-0 kubenswrapper[17411]: I0223 13:08:52.692618 17411 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 13:08:52.704187 master-0 kubenswrapper[17411]: I0223 13:08:52.704088 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:08:52.704417 master-0 kubenswrapper[17411]: I0223 13:08:52.704238 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:08:52.705154 master-0 kubenswrapper[17411]: I0223 13:08:52.705066 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:52.705154 master-0 kubenswrapper[17411]: I0223 13:08:52.705142 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ea3f0260-2af7-42e4-826b-edb7d49cdb9b"
Feb 23 13:08:52.741686 master-0 kubenswrapper[17411]: I0223 13:08:52.741577 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=19.741549407 podStartE2EDuration="19.741549407s" podCreationTimestamp="2026-02-23 13:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:08:52.736770172 +0000 UTC m=+126.164276779" watchObservedRunningTime="2026-02-23 13:08:52.741549407 +0000 UTC m=+126.169056024"
Feb 23 13:08:52.819286 master-0 kubenswrapper[17411]: I0223 13:08:52.819219 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:52.820276 master-0 kubenswrapper[17411]: I0223 13:08:52.820216 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:08:52.916574 master-0 kubenswrapper[17411]: I0223 13:08:52.916458 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 13:08:52.922340 master-0 kubenswrapper[17411]: I0223 13:08:52.922204 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 23 13:08:53.053386 master-0 kubenswrapper[17411]: I0223 13:08:53.053316 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 23 13:08:53.057969 master-0 kubenswrapper[17411]: I0223 13:08:53.057904 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 23 13:08:53.066593 master-0 kubenswrapper[17411]: I0223 13:08:53.066555 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 23 13:08:53.068709 master-0 kubenswrapper[17411]: I0223 13:08:53.068685 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 23 13:08:53.125981 master-0 kubenswrapper[17411]: I0223 13:08:53.125899 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 23 13:08:53.146898 master-0 kubenswrapper[17411]: I0223 13:08:53.146800 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 23 13:08:53.228447 master-0 kubenswrapper[17411]: I0223 13:08:53.228377 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 23 13:08:53.235431 master-0 kubenswrapper[17411]: I0223 13:08:53.235389 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 23 13:08:53.243780 master-0 kubenswrapper[17411]: I0223 13:08:53.243736 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 23 13:08:53.315915 master-0 kubenswrapper[17411]: I0223 13:08:53.315855 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 23 13:08:53.362871 master-0 kubenswrapper[17411]: I0223 13:08:53.362793 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 23 13:08:53.379373 master-0 kubenswrapper[17411]: I0223 13:08:53.379311 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 23 13:08:53.435260 master-0 kubenswrapper[17411]: I0223 13:08:53.435189 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 23 13:08:53.519314 master-0 kubenswrapper[17411]: I0223 13:08:53.519179 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 23 13:08:53.585670 master-0 kubenswrapper[17411]: I0223 13:08:53.585607 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:08:53.601372 master-0 kubenswrapper[17411]: I0223 13:08:53.600627 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 23 13:08:53.809455 master-0 kubenswrapper[17411]: I0223 13:08:53.809284 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 23 13:08:53.825467 master-0 kubenswrapper[17411]: I0223 13:08:53.825400 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 23 13:08:53.907527 master-0 kubenswrapper[17411]: I0223 13:08:53.907461 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 23 13:08:54.001459 master-0 kubenswrapper[17411]: I0223 13:08:54.001401 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 23 13:08:54.098026 master-0 kubenswrapper[17411]: I0223 13:08:54.097843 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 23 13:08:54.099952 master-0 kubenswrapper[17411]: I0223 13:08:54.099912 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 23 13:08:54.109505 master-0 kubenswrapper[17411]: I0223 13:08:54.109459 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-dldvx"
Feb 23 13:08:54.121165 master-0 kubenswrapper[17411]: I0223 13:08:54.121147 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-wbd45"
Feb 23 13:08:54.220644 master-0 kubenswrapper[17411]: I0223 13:08:54.220574 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 23 13:08:54.259438 master-0 kubenswrapper[17411]: I0223 13:08:54.259368 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 23 13:08:54.321517 master-0 kubenswrapper[17411]: I0223 13:08:54.321449 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 23 13:08:54.342213 master-0 kubenswrapper[17411]: I0223 13:08:54.342127 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 23 13:08:54.511436 master-0 kubenswrapper[17411]: I0223 13:08:54.511377 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 23 13:08:54.667111 master-0 kubenswrapper[17411]: I0223 13:08:54.667066 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 13:08:54.671474 master-0 kubenswrapper[17411]: I0223 13:08:54.671377 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 23 13:08:54.690747 master-0 kubenswrapper[17411]: I0223 13:08:54.690677 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 23 13:08:54.969012 master-0 kubenswrapper[17411]: I0223 13:08:54.968971 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 23 13:08:54.981209 master-0 kubenswrapper[17411]: I0223 13:08:54.981153 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 13:08:55.298579 master-0 kubenswrapper[17411]: I0223 13:08:55.298063 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 23 13:08:55.307919 master-0 kubenswrapper[17411]: I0223 13:08:55.307725 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 23 13:08:55.357450 master-0 kubenswrapper[17411]: I0223 13:08:55.357165 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 23 13:08:55.357450 master-0 kubenswrapper[17411]: I0223 13:08:55.357370 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:08:55.368910 master-0 kubenswrapper[17411]: I0223 13:08:55.368838 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 23 13:08:55.404750 master-0 kubenswrapper[17411]: I0223 13:08:55.404671 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 23 13:08:55.426030 master-0 kubenswrapper[17411]: I0223 13:08:55.425760 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 23 13:08:55.457487 master-0 kubenswrapper[17411]: I0223 13:08:55.457432 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:08:55.518524 master-0 kubenswrapper[17411]: I0223 13:08:55.516511 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 13:08:55.526600 master-0 kubenswrapper[17411]: I0223 13:08:55.526556 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 23 13:08:55.569995 master-0 kubenswrapper[17411]: I0223 13:08:55.569920 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 23 13:08:55.573181 master-0 kubenswrapper[17411]: I0223 13:08:55.573110 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 13:08:55.600201 master-0 kubenswrapper[17411]: I0223 13:08:55.600120 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 23 13:08:55.610994 master-0 kubenswrapper[17411]: I0223 13:08:55.610849 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 23 13:08:55.612719 master-0 kubenswrapper[17411]: I0223 13:08:55.612544 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:08:55.647282 master-0 kubenswrapper[17411]: I0223 13:08:55.647198 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-qhbh8"
Feb 23 13:08:55.736495 master-0 kubenswrapper[17411]: I0223 13:08:55.736418 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 23 13:08:55.780862 master-0 kubenswrapper[17411]: I0223 13:08:55.780819 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 23 13:08:55.789185 master-0 kubenswrapper[17411]: I0223 13:08:55.789164 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:08:55.818402 master-0 kubenswrapper[17411]: I0223 13:08:55.817746 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 23 13:08:55.842524 master-0 kubenswrapper[17411]: I0223 13:08:55.836467 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-vhrrg"
Feb 23 13:08:55.865630 master-0 kubenswrapper[17411]: I0223 13:08:55.865527 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-zmw9t"
Feb 23 13:08:55.871091 master-0 kubenswrapper[17411]: I0223 13:08:55.871050 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 23 13:08:55.961699 master-0 kubenswrapper[17411]: I0223 13:08:55.961609 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 23 13:08:55.983135 master-0 kubenswrapper[17411]: I0223 13:08:55.983064 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 23 13:08:56.052892 master-0 kubenswrapper[17411]: I0223 13:08:56.052814 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 23 13:08:56.053524 master-0 kubenswrapper[17411]: I0223 13:08:56.053484 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 23 13:08:56.062639 master-0 kubenswrapper[17411]: I0223 13:08:56.062604 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-f5gf8"
Feb 23 13:08:56.088074 master-0 kubenswrapper[17411]: I0223 13:08:56.087914 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 23 13:08:56.147152 master-0 kubenswrapper[17411]: I0223 13:08:56.147056 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-9jkd0a8djrqaf"
Feb 23 13:08:56.282473 master-0 kubenswrapper[17411]: I0223 13:08:56.282393 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 23 13:08:56.286036 master-0 kubenswrapper[17411]: I0223 13:08:56.285998 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 23 13:08:56.333002 master-0 kubenswrapper[17411]: I0223 13:08:56.332919 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2628k"
Feb 23 13:08:56.394851 master-0 kubenswrapper[17411]: I0223 13:08:56.394693 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 23 13:08:56.414004 master-0 kubenswrapper[17411]: I0223 13:08:56.413685 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 13:08:56.414230 master-0 kubenswrapper[17411]: I0223 13:08:56.413721 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 23 13:08:56.501335 master-0 kubenswrapper[17411]: I0223 13:08:56.501065 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:08:56.501335 master-0 kubenswrapper[17411]: I0223 13:08:56.501322 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="95806c9442ee27c355bfbf25ba6f70f0" containerName="startup-monitor" containerID="cri-o://2815ad42dd26968dc87d1128c455ddbb0dab29bbbd4c503e2698056875d2d29a" gracePeriod=5
Feb 23 13:08:56.507550 master-0 kubenswrapper[17411]: I0223 13:08:56.507496 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 23 13:08:56.577168 master-0 kubenswrapper[17411]: I0223 13:08:56.576768 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:08:56.578738 master-0 kubenswrapper[17411]: I0223 13:08:56.578698 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 13:08:56.700021 master-0 kubenswrapper[17411]: I0223 13:08:56.699968 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8odpr3ab0635p"
Feb 23 13:08:56.748149 master-0 kubenswrapper[17411]: I0223 13:08:56.748069 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 23 13:08:56.762482 master-0 kubenswrapper[17411]: I0223 13:08:56.762428 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 23 13:08:56.808473 master-0 kubenswrapper[17411]: I0223 13:08:56.808420 17411 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 23 13:08:56.864275 master-0 kubenswrapper[17411]: I0223 13:08:56.864186 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7q6an9sqsfn51"
Feb 23 13:08:56.958389 master-0 kubenswrapper[17411]: I0223 13:08:56.958287 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-4q8qn"
Feb 23 13:08:57.004526 master-0 kubenswrapper[17411]: I0223 13:08:57.004470 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 23 13:08:57.053878 master-0 kubenswrapper[17411]: I0223 13:08:57.053815 17411 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 23 13:08:57.085585 master-0 kubenswrapper[17411]: I0223 13:08:57.085505 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:08:57.139648 master-0 kubenswrapper[17411]: I0223 13:08:57.139571 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 23 13:08:57.166990 master-0 kubenswrapper[17411]: I0223 13:08:57.166872 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 23 13:08:57.230785 master-0 kubenswrapper[17411]: I0223 13:08:57.230684 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-zmzm6"
Feb 23 13:08:57.230961 master-0 kubenswrapper[17411]: I0223 13:08:57.230914 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 23 13:08:57.263742 master-0 kubenswrapper[17411]: I0223 13:08:57.263690 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 23 13:08:57.321794 master-0 kubenswrapper[17411]: I0223 13:08:57.321728 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 23 13:08:57.327095 master-0 kubenswrapper[17411]: I0223 13:08:57.327041 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-z5ckf"
Feb 23 13:08:57.362827 master-0 kubenswrapper[17411]: I0223 13:08:57.362758 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 23 13:08:57.439050 master-0 kubenswrapper[17411]: I0223 13:08:57.438978 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 23 13:08:57.493411 master-0 kubenswrapper[17411]: I0223 13:08:57.485456 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 13:08:57.519472 master-0 kubenswrapper[17411]: I0223 13:08:57.519424 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 23 13:08:57.536234 master-0 kubenswrapper[17411]: I0223 13:08:57.536181 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 23 13:08:57.573947 master-0 kubenswrapper[17411]: I0223 13:08:57.573889 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 23 13:08:57.582721 master-0 kubenswrapper[17411]: I0223 13:08:57.582684 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 23 13:08:57.624243 master-0 kubenswrapper[17411]: I0223 13:08:57.624187 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:08:57.739210 master-0 kubenswrapper[17411]: I0223 13:08:57.739141 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 23 13:08:57.857019 master-0 kubenswrapper[17411]: I0223 13:08:57.856851 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 23 13:08:57.911452 master-0 kubenswrapper[17411]: I0223 13:08:57.911193 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 23 13:08:57.967312 master-0 kubenswrapper[17411]: I0223 13:08:57.967229 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 23 13:08:58.002264 master-0 kubenswrapper[17411]: I0223 13:08:58.002181 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 13:08:58.112686 master-0 kubenswrapper[17411]: I0223 13:08:58.112532 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 23 13:08:58.271186 master-0 kubenswrapper[17411]: I0223 13:08:58.271106 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:08:58.279181 master-0 kubenswrapper[17411]: I0223 13:08:58.279118 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 23 13:08:58.282419 master-0 kubenswrapper[17411]: I0223 13:08:58.282378 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 23 13:08:58.312809 master-0 kubenswrapper[17411]: I0223 13:08:58.312759 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 23 13:08:58.378991 master-0 kubenswrapper[17411]: I0223 13:08:58.378860 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 23 13:08:58.691651 master-0 kubenswrapper[17411]: I0223 13:08:58.691606 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 23 13:08:58.748729 master-0 kubenswrapper[17411]: I0223 13:08:58.748646 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4dmq5"
Feb 23 13:08:58.805268 master-0 kubenswrapper[17411]: I0223 13:08:58.805159 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-h78lq"
Feb 23 13:08:58.854217 master-0 kubenswrapper[17411]: I0223 13:08:58.854149 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-n8vwz"
Feb 23 13:08:59.028821 master-0 kubenswrapper[17411]: I0223 13:08:59.028649 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 23 13:08:59.210214 master-0 kubenswrapper[17411]: I0223 13:08:59.210118 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 23 13:08:59.306222 master-0 kubenswrapper[17411]: I0223 13:08:59.306032 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 23 13:08:59.368193 master-0 kubenswrapper[17411]: I0223 13:08:59.368132 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 23 13:08:59.477755 master-0 kubenswrapper[17411]: I0223 13:08:59.477682 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 23 13:08:59.624183 master-0 kubenswrapper[17411]: I0223 13:08:59.624067 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 23 13:08:59.656027 master-0 kubenswrapper[17411]: I0223 13:08:59.655746 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 23 13:08:59.737145 master-0 kubenswrapper[17411]: I0223 13:08:59.736968 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 23 13:08:59.761653 master-0 kubenswrapper[17411]: I0223 13:08:59.757294 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 23 13:08:59.840065 master-0 kubenswrapper[17411]: I0223 13:08:59.840026 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-8ns2k"
Feb 23 13:08:59.889694 master-0 kubenswrapper[17411]: I0223 13:08:59.889569 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-pzqs9"
Feb 23 13:08:59.903031 master-0 kubenswrapper[17411]: I0223 13:08:59.903002 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 23 13:08:59.941058 master-0 kubenswrapper[17411]: I0223 13:08:59.941002 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 23 13:08:59.982861 master-0 kubenswrapper[17411]: I0223 13:08:59.982804 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 23 13:09:00.094966 master-0 kubenswrapper[17411]: I0223 13:09:00.094892 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:09:00.220362 master-0 kubenswrapper[17411]: I0223 13:09:00.220301 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 23 13:09:00.298047 master-0 kubenswrapper[17411]: I0223 13:09:00.297945 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 23 13:09:00.361416 master-0 kubenswrapper[17411]: I0223 13:09:00.361356 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 23 13:09:00.619366 master-0 kubenswrapper[17411]: I0223 13:09:00.619191 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-sxjzf"
Feb 23 13:09:00.768093 master-0 kubenswrapper[17411]: I0223 13:09:00.768026 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-hjsc8"
Feb 23 13:09:00.770062 master-0 kubenswrapper[17411]: I0223 13:09:00.770031 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 23 13:09:01.011313 master-0 kubenswrapper[17411]: I0223 13:09:01.011233 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 23 13:09:01.366298 master-0 kubenswrapper[17411]: I0223 13:09:01.366105 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 23 13:09:01.394680 master-0 kubenswrapper[17411]: I0223 13:09:01.394628 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 23 13:09:01.492397 master-0 kubenswrapper[17411]: I0223 13:09:01.492327 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 23 13:09:01.729899 master-0 kubenswrapper[17411]: I0223 13:09:01.729819 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 23 13:09:01.891160 master-0 kubenswrapper[17411]: I0223 13:09:01.891063 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_95806c9442ee27c355bfbf25ba6f70f0/startup-monitor/0.log"
Feb 23 13:09:01.891160 master-0 kubenswrapper[17411]: I0223 13:09:01.891161 17411 generic.go:334] "Generic (PLEG): container finished" podID="95806c9442ee27c355bfbf25ba6f70f0" containerID="2815ad42dd26968dc87d1128c455ddbb0dab29bbbd4c503e2698056875d2d29a" exitCode=137
Feb 23 13:09:02.054224 master-0 kubenswrapper[17411]: I0223 13:09:02.054190 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 23 13:09:02.080408 master-0 kubenswrapper[17411]: I0223 13:09:02.080355 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_95806c9442ee27c355bfbf25ba6f70f0/startup-monitor/0.log"
Feb 23 13:09:02.080636 master-0 kubenswrapper[17411]: I0223 13:09:02.080499 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:09:02.116723 master-0 kubenswrapper[17411]: I0223 13:09:02.116676 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 23 13:09:02.219612 master-0 kubenswrapper[17411]: I0223 13:09:02.219568 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") "
Feb 23 13:09:02.220023 master-0 kubenswrapper[17411]: I0223 13:09:02.220000 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") "
Feb 23 13:09:02.220155 master-0 kubenswrapper[17411]: I0223 13:09:02.220141 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") "
Feb 23 13:09:02.220290 master-0 kubenswrapper[17411]: I0223 13:09:02.220274 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") "
Feb 23 13:09:02.220411 master-0 kubenswrapper[17411]: I0223 13:09:02.220394 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") "
Feb 23 13:09:02.220509 master-0 kubenswrapper[17411]: I0223 13:09:02.220142 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests" (OuterVolumeSpecName: "manifests") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:09:02.220555 master-0 kubenswrapper[17411]: I0223 13:09:02.220279 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:09:02.220555 master-0 kubenswrapper[17411]: I0223 13:09:02.220361 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock" (OuterVolumeSpecName: "var-lock") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:09:02.220639 master-0 kubenswrapper[17411]: I0223 13:09:02.220475 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log" (OuterVolumeSpecName: "var-log") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:09:02.221037 master-0 kubenswrapper[17411]: I0223 13:09:02.221021 17411 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") on node \"master-0\" DevicePath \"\""
Feb 23 13:09:02.221209 master-0 kubenswrapper[17411]: I0223 13:09:02.221194 17411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:09:02.221318 master-0 kubenswrapper[17411]: I0223 13:09:02.221295 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 23 13:09:02.221395 master-0 kubenswrapper[17411]: I0223 13:09:02.221384 17411 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") on node \"master-0\" DevicePath \"\""
Feb 23 13:09:02.221465 master-0 kubenswrapper[17411]: I0223 13:09:02.221422 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 23 13:09:02.224785 master-0 kubenswrapper[17411]: I0223 13:09:02.224734 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:09:02.323060 master-0 kubenswrapper[17411]: I0223 13:09:02.322909 17411 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:09:02.352780 master-0 kubenswrapper[17411]: I0223 13:09:02.352694 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 23 13:09:02.500231 master-0 kubenswrapper[17411]: I0223 13:09:02.500170 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 23 13:09:02.582619 master-0 kubenswrapper[17411]: I0223 13:09:02.582486 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 23 13:09:02.877638 master-0 kubenswrapper[17411]: I0223 13:09:02.877444 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95806c9442ee27c355bfbf25ba6f70f0" path="/var/lib/kubelet/pods/95806c9442ee27c355bfbf25ba6f70f0/volumes"
Feb 23 13:09:02.901037 master-0 kubenswrapper[17411]: I0223 13:09:02.900950 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_95806c9442ee27c355bfbf25ba6f70f0/startup-monitor/0.log"
Feb 23 13:09:02.901877 master-0 kubenswrapper[17411]: I0223 13:09:02.901083 17411 scope.go:117] "RemoveContainer" containerID="2815ad42dd26968dc87d1128c455ddbb0dab29bbbd4c503e2698056875d2d29a"
Feb 23 13:09:02.901877 master-0 kubenswrapper[17411]: I0223 13:09:02.901205 17411 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:09:03.497106 master-0 kubenswrapper[17411]: I0223 13:09:03.497050 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:09:04.307427 master-0 kubenswrapper[17411]: I0223 13:09:04.307367 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 13:09:07.299816 master-0 kubenswrapper[17411]: E0223 13:09:07.299732 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[trusted-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" Feb 23 13:09:07.938875 master-0 kubenswrapper[17411]: I0223 13:09:07.938794 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:09:12.291629 master-0 kubenswrapper[17411]: I0223 13:09:12.291552 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:09:12.292915 master-0 kubenswrapper[17411]: E0223 13:09:12.291798 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.29176008 +0000 UTC m=+267.719266707 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:09:30.395663 master-0 kubenswrapper[17411]: E0223 13:09:30.395569 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[alertmanager-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" Feb 23 13:09:31.103067 master-0 kubenswrapper[17411]: I0223 13:09:31.102976 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:09:35.498536 master-0 kubenswrapper[17411]: I0223 13:09:35.498443 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:09:35.499411 master-0 kubenswrapper[17411]: E0223 13:09:35.498778 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle podName:b0e437b4-e6fd-482f-91a2-f48b9f087321 nodeName:}" failed. No retries permitted until 2026-02-23 13:11:37.498745042 +0000 UTC m=+290.926251659 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:09:35.768757 master-0 kubenswrapper[17411]: E0223 13:09:35.768523 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" Feb 23 13:09:36.146572 master-0 kubenswrapper[17411]: I0223 13:09:36.146477 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:09:40.779700 master-0 kubenswrapper[17411]: I0223 13:09:40.779609 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:09:40.780523 master-0 kubenswrapper[17411]: E0223 13:09:40.779840 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle podName:c229faa3-6eb1-42d6-8e10-f4cadc952d17 nodeName:}" failed. No retries permitted until 2026-02-23 13:11:42.779819537 +0000 UTC m=+296.207326144 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:10:31.583854 master-0 kubenswrapper[17411]: I0223 13:10:31.583784 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d495fcf8-w9576"] Feb 23 13:10:31.584499 master-0 kubenswrapper[17411]: E0223 13:10:31.584144 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" containerName="installer" Feb 23 13:10:31.584499 master-0 kubenswrapper[17411]: I0223 13:10:31.584161 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" containerName="installer" Feb 23 13:10:31.584499 master-0 kubenswrapper[17411]: E0223 13:10:31.584185 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95806c9442ee27c355bfbf25ba6f70f0" containerName="startup-monitor" Feb 23 13:10:31.584499 master-0 kubenswrapper[17411]: I0223 13:10:31.584195 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="95806c9442ee27c355bfbf25ba6f70f0" containerName="startup-monitor" Feb 23 13:10:31.584499 master-0 kubenswrapper[17411]: I0223 13:10:31.584368 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="95806c9442ee27c355bfbf25ba6f70f0" containerName="startup-monitor" Feb 23 13:10:31.584499 master-0 kubenswrapper[17411]: I0223 13:10:31.584401 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="649c8f56-22ef-4e68-bc9b-9d608fba998c" containerName="installer" Feb 23 13:10:31.584962 master-0 kubenswrapper[17411]: I0223 13:10:31.584929 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.588467 master-0 kubenswrapper[17411]: I0223 13:10:31.588427 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 13:10:31.588751 master-0 kubenswrapper[17411]: I0223 13:10:31.588724 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 13:10:31.588899 master-0 kubenswrapper[17411]: I0223 13:10:31.588841 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 13:10:31.588950 master-0 kubenswrapper[17411]: I0223 13:10:31.588924 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 13:10:31.589398 master-0 kubenswrapper[17411]: I0223 13:10:31.589369 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 13:10:31.589747 master-0 kubenswrapper[17411]: I0223 13:10:31.589701 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 13:10:31.590981 master-0 kubenswrapper[17411]: I0223 13:10:31.590952 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-kcb76" Feb 23 13:10:31.591197 master-0 kubenswrapper[17411]: I0223 13:10:31.591175 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 13:10:31.591538 master-0 kubenswrapper[17411]: I0223 13:10:31.591503 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 13:10:31.591802 master-0 kubenswrapper[17411]: I0223 13:10:31.591774 17411 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 13:10:31.592038 master-0 kubenswrapper[17411]: I0223 13:10:31.592012 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 13:10:31.593836 master-0 kubenswrapper[17411]: I0223 13:10:31.593805 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 13:10:31.611271 master-0 kubenswrapper[17411]: I0223 13:10:31.604694 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 13:10:31.611271 master-0 kubenswrapper[17411]: I0223 13:10:31.607601 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 13:10:31.611271 master-0 kubenswrapper[17411]: I0223 13:10:31.608901 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d495fcf8-w9576"] Feb 23 13:10:31.715482 master-0 kubenswrapper[17411]: I0223 13:10:31.715387 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.715753 master-0 kubenswrapper[17411]: I0223 13:10:31.715510 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-policies\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " 
pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.715753 master-0 kubenswrapper[17411]: I0223 13:10:31.715580 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.715753 master-0 kubenswrapper[17411]: I0223 13:10:31.715617 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.715753 master-0 kubenswrapper[17411]: I0223 13:10:31.715664 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.715753 master-0 kubenswrapper[17411]: I0223 13:10:31.715741 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 
13:10:31.716067 master-0 kubenswrapper[17411]: I0223 13:10:31.715800 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhs4\" (UniqueName: \"kubernetes.io/projected/91641690-255e-4c8d-ae63-ad4ad07284b6-kube-api-access-2lhs4\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.716067 master-0 kubenswrapper[17411]: I0223 13:10:31.715840 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-login\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.716067 master-0 kubenswrapper[17411]: I0223 13:10:31.715882 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.716067 master-0 kubenswrapper[17411]: I0223 13:10:31.715955 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.716067 master-0 kubenswrapper[17411]: I0223 13:10:31.715998 17411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-error\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.716461 master-0 kubenswrapper[17411]: I0223 13:10:31.716074 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.716461 master-0 kubenswrapper[17411]: I0223 13:10:31.716159 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-dir\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818036 master-0 kubenswrapper[17411]: I0223 13:10:31.817961 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818036 master-0 kubenswrapper[17411]: I0223 13:10:31.818022 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-error\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818036 master-0 kubenswrapper[17411]: I0223 13:10:31.818047 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818081 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-dir\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818131 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818173 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-policies\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 
master-0 kubenswrapper[17411]: I0223 13:10:31.818209 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818227 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818263 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818311 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818377 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2lhs4\" (UniqueName: \"kubernetes.io/projected/91641690-255e-4c8d-ae63-ad4ad07284b6-kube-api-access-2lhs4\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818408 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-login\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: I0223 13:10:31.818481 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.818594 master-0 kubenswrapper[17411]: E0223 13:10:31.818587 17411 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Feb 23 13:10:31.819669 master-0 kubenswrapper[17411]: E0223 13:10:31.818702 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session podName:91641690-255e-4c8d-ae63-ad4ad07284b6 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:32.318662522 +0000 UTC m=+225.746169159 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session") pod "oauth-openshift-d495fcf8-w9576" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6") : secret "v4-0-config-system-session" not found Feb 23 13:10:31.819669 master-0 kubenswrapper[17411]: I0223 13:10:31.819124 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-dir\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.819669 master-0 kubenswrapper[17411]: E0223 13:10:31.819133 17411 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 23 13:10:31.819669 master-0 kubenswrapper[17411]: E0223 13:10:31.819412 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig podName:91641690-255e-4c8d-ae63-ad4ad07284b6 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:32.319374041 +0000 UTC m=+225.746880668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig") pod "oauth-openshift-d495fcf8-w9576" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6") : configmap "v4-0-config-system-cliconfig" not found Feb 23 13:10:31.821362 master-0 kubenswrapper[17411]: I0223 13:10:31.820108 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.821362 master-0 kubenswrapper[17411]: I0223 13:10:31.820594 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-policies\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.821627 master-0 kubenswrapper[17411]: I0223 13:10:31.821467 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.824155 master-0 kubenswrapper[17411]: I0223 13:10:31.823287 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-error\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: 
\"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.824155 master-0 kubenswrapper[17411]: I0223 13:10:31.823426 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.824155 master-0 kubenswrapper[17411]: I0223 13:10:31.823815 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-login\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.824155 master-0 kubenswrapper[17411]: I0223 13:10:31.823860 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.825944 master-0 kubenswrapper[17411]: I0223 13:10:31.825887 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.832126 master-0 kubenswrapper[17411]: I0223 
13:10:31.832061 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:31.837160 master-0 kubenswrapper[17411]: I0223 13:10:31.837029 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhs4\" (UniqueName: \"kubernetes.io/projected/91641690-255e-4c8d-ae63-ad4ad07284b6-kube-api-access-2lhs4\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:32.328278 master-0 kubenswrapper[17411]: I0223 13:10:32.328186 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:32.328531 master-0 kubenswrapper[17411]: I0223 13:10:32.328360 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:32.328531 master-0 kubenswrapper[17411]: E0223 13:10:32.328457 17411 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 23 13:10:32.328648 master-0 kubenswrapper[17411]: E0223 
13:10:32.328605 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig podName:91641690-255e-4c8d-ae63-ad4ad07284b6 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:33.328560869 +0000 UTC m=+226.756067506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig") pod "oauth-openshift-d495fcf8-w9576" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6") : configmap "v4-0-config-system-cliconfig" not found Feb 23 13:10:32.328770 master-0 kubenswrapper[17411]: E0223 13:10:32.328723 17411 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Feb 23 13:10:32.328839 master-0 kubenswrapper[17411]: E0223 13:10:32.328827 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session podName:91641690-255e-4c8d-ae63-ad4ad07284b6 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:33.328797235 +0000 UTC m=+226.756303862 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session") pod "oauth-openshift-d495fcf8-w9576" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6") : secret "v4-0-config-system-session" not found Feb 23 13:10:33.349173 master-0 kubenswrapper[17411]: I0223 13:10:33.349075 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:33.349937 master-0 kubenswrapper[17411]: E0223 13:10:33.349363 17411 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 23 13:10:33.349937 master-0 kubenswrapper[17411]: E0223 13:10:33.349497 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig podName:91641690-255e-4c8d-ae63-ad4ad07284b6 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:35.34945992 +0000 UTC m=+228.776966557 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig") pod "oauth-openshift-d495fcf8-w9576" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6") : configmap "v4-0-config-system-cliconfig" not found Feb 23 13:10:33.349937 master-0 kubenswrapper[17411]: I0223 13:10:33.349549 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:33.355683 master-0 kubenswrapper[17411]: I0223 13:10:33.355603 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:35.382770 master-0 kubenswrapper[17411]: I0223 13:10:35.382687 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:35.383533 master-0 kubenswrapper[17411]: E0223 13:10:35.382953 17411 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 23 13:10:35.383533 master-0 kubenswrapper[17411]: E0223 13:10:35.383117 17411 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig podName:91641690-255e-4c8d-ae63-ad4ad07284b6 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:39.383082017 +0000 UTC m=+232.810588824 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig") pod "oauth-openshift-d495fcf8-w9576" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6") : configmap "v4-0-config-system-cliconfig" not found Feb 23 13:10:38.012862 master-0 kubenswrapper[17411]: I0223 13:10:38.012770 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d495fcf8-w9576"] Feb 23 13:10:38.013755 master-0 kubenswrapper[17411]: E0223 13:10:38.013544 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" podUID="91641690-255e-4c8d-ae63-ad4ad07284b6" Feb 23 13:10:38.628828 master-0 kubenswrapper[17411]: I0223 13:10:38.628744 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:38.642490 master-0 kubenswrapper[17411]: I0223 13:10:38.642433 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:38.746483 master-0 kubenswrapper[17411]: I0223 13:10:38.746387 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-serving-cert\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.746774 master-0 kubenswrapper[17411]: I0223 13:10:38.746633 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-trusted-ca-bundle\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.746774 master-0 kubenswrapper[17411]: I0223 13:10:38.746697 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-ocp-branding-template\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.746990 master-0 kubenswrapper[17411]: I0223 13:10:38.746936 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-router-certs\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747089 master-0 kubenswrapper[17411]: I0223 13:10:38.747041 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-policies\") pod 
\"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747158 master-0 kubenswrapper[17411]: I0223 13:10:38.747087 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-login\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747158 master-0 kubenswrapper[17411]: I0223 13:10:38.747150 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-service-ca\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747401 master-0 kubenswrapper[17411]: I0223 13:10:38.747220 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-provider-selection\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747401 master-0 kubenswrapper[17411]: I0223 13:10:38.747280 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747401 master-0 kubenswrapper[17411]: I0223 13:10:38.747312 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-error\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747714 master-0 kubenswrapper[17411]: I0223 13:10:38.747648 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-dir\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747828 master-0 kubenswrapper[17411]: I0223 13:10:38.747719 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lhs4\" (UniqueName: \"kubernetes.io/projected/91641690-255e-4c8d-ae63-ad4ad07284b6-kube-api-access-2lhs4\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:38.747936 master-0 kubenswrapper[17411]: I0223 13:10:38.747815 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:10:38.748133 master-0 kubenswrapper[17411]: I0223 13:10:38.748075 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:10:38.748388 master-0 kubenswrapper[17411]: I0223 13:10:38.748342 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.748388 master-0 kubenswrapper[17411]: I0223 13:10:38.748370 17411 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.751361 master-0 kubenswrapper[17411]: I0223 13:10:38.751239 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:10:38.752097 master-0 kubenswrapper[17411]: I0223 13:10:38.752006 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:10:38.752238 master-0 kubenswrapper[17411]: I0223 13:10:38.752186 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:10:38.752907 master-0 kubenswrapper[17411]: I0223 13:10:38.752838 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:10:38.753733 master-0 kubenswrapper[17411]: I0223 13:10:38.753675 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:10:38.754641 master-0 kubenswrapper[17411]: I0223 13:10:38.754589 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:10:38.754799 master-0 kubenswrapper[17411]: I0223 13:10:38.754633 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:10:38.756441 master-0 kubenswrapper[17411]: I0223 13:10:38.756346 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:10:38.757038 master-0 kubenswrapper[17411]: I0223 13:10:38.756967 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91641690-255e-4c8d-ae63-ad4ad07284b6-kube-api-access-2lhs4" (OuterVolumeSpecName: "kube-api-access-2lhs4") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "kube-api-access-2lhs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:10:38.761498 master-0 kubenswrapper[17411]: I0223 13:10:38.761417 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:10:38.849957 master-0 kubenswrapper[17411]: I0223 13:10:38.849877 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.849957 master-0 kubenswrapper[17411]: I0223 13:10:38.849940 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.849957 master-0 kubenswrapper[17411]: I0223 13:10:38.849964 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.850509 master-0 kubenswrapper[17411]: I0223 13:10:38.849988 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.850509 master-0 kubenswrapper[17411]: I0223 13:10:38.850216 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lhs4\" 
(UniqueName: \"kubernetes.io/projected/91641690-255e-4c8d-ae63-ad4ad07284b6-kube-api-access-2lhs4\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.850509 master-0 kubenswrapper[17411]: I0223 13:10:38.850235 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.850509 master-0 kubenswrapper[17411]: I0223 13:10:38.850279 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.850509 master-0 kubenswrapper[17411]: I0223 13:10:38.850298 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.850509 master-0 kubenswrapper[17411]: I0223 13:10:38.850316 17411 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:38.850509 master-0 kubenswrapper[17411]: I0223 13:10:38.850334 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:39.459490 master-0 kubenswrapper[17411]: I0223 13:10:39.459397 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:39.460377 master-0 kubenswrapper[17411]: I0223 13:10:39.460299 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d495fcf8-w9576\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:39.561063 master-0 kubenswrapper[17411]: I0223 13:10:39.560945 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") pod \"91641690-255e-4c8d-ae63-ad4ad07284b6\" (UID: \"91641690-255e-4c8d-ae63-ad4ad07284b6\") " Feb 23 13:10:39.561831 master-0 kubenswrapper[17411]: I0223 13:10:39.561738 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "91641690-255e-4c8d-ae63-ad4ad07284b6" (UID: "91641690-255e-4c8d-ae63-ad4ad07284b6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:10:39.637073 master-0 kubenswrapper[17411]: I0223 13:10:39.636981 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d495fcf8-w9576" Feb 23 13:10:39.663842 master-0 kubenswrapper[17411]: I0223 13:10:39.663758 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91641690-255e-4c8d-ae63-ad4ad07284b6-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 23 13:10:39.689872 master-0 kubenswrapper[17411]: I0223 13:10:39.689810 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6d4766ffb-ff98d"] Feb 23 13:10:39.691487 master-0 kubenswrapper[17411]: I0223 13:10:39.691451 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.695597 master-0 kubenswrapper[17411]: I0223 13:10:39.695549 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 13:10:39.696593 master-0 kubenswrapper[17411]: I0223 13:10:39.696569 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 13:10:39.697567 master-0 kubenswrapper[17411]: I0223 13:10:39.697527 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d495fcf8-w9576"] Feb 23 13:10:39.697681 master-0 kubenswrapper[17411]: I0223 13:10:39.697568 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 13:10:39.697681 master-0 kubenswrapper[17411]: I0223 13:10:39.697649 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 13:10:39.697803 master-0 kubenswrapper[17411]: I0223 13:10:39.697693 17411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 13:10:39.697803 master-0 kubenswrapper[17411]: I0223 13:10:39.697739 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 13:10:39.698171 master-0 kubenswrapper[17411]: I0223 13:10:39.698134 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 13:10:39.698285 master-0 kubenswrapper[17411]: I0223 13:10:39.698262 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 13:10:39.698745 master-0 kubenswrapper[17411]: I0223 13:10:39.698712 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 13:10:39.699153 master-0 kubenswrapper[17411]: I0223 13:10:39.699109 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-kcb76" Feb 23 13:10:39.699320 master-0 kubenswrapper[17411]: I0223 13:10:39.699274 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 13:10:39.702342 master-0 kubenswrapper[17411]: I0223 13:10:39.700810 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 13:10:39.704294 master-0 kubenswrapper[17411]: I0223 13:10:39.704257 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-d495fcf8-w9576"] Feb 23 13:10:39.709111 master-0 kubenswrapper[17411]: I0223 13:10:39.708689 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 13:10:39.709968 master-0 kubenswrapper[17411]: I0223 13:10:39.709878 17411 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4766ffb-ff98d"] Feb 23 13:10:39.726537 master-0 kubenswrapper[17411]: I0223 13:10:39.726139 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 13:10:39.765568 master-0 kubenswrapper[17411]: I0223 13:10:39.765525 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.765908 master-0 kubenswrapper[17411]: I0223 13:10:39.765886 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/240a114d-1fb4-4787-a56d-820006dd7888-audit-dir\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.766061 master-0 kubenswrapper[17411]: I0223 13:10:39.766041 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.766203 master-0 kubenswrapper[17411]: I0223 13:10:39.766182 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82zkl\" (UniqueName: 
\"kubernetes.io/projected/240a114d-1fb4-4787-a56d-820006dd7888-kube-api-access-82zkl\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.766364 master-0 kubenswrapper[17411]: I0223 13:10:39.766346 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.766581 master-0 kubenswrapper[17411]: I0223 13:10:39.766561 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.766709 master-0 kubenswrapper[17411]: I0223 13:10:39.766691 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.766864 master-0 kubenswrapper[17411]: I0223 13:10:39.766829 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-error\") 
pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.767004 master-0 kubenswrapper[17411]: I0223 13:10:39.766987 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.767120 master-0 kubenswrapper[17411]: I0223 13:10:39.767101 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.767279 master-0 kubenswrapper[17411]: I0223 13:10:39.767254 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.767405 master-0 kubenswrapper[17411]: I0223 13:10:39.767390 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-session\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: 
\"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.767581 master-0 kubenswrapper[17411]: I0223 13:10:39.767557 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-audit-policies\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.869816 master-0 kubenswrapper[17411]: I0223 13:10:39.869781 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.870078 master-0 kubenswrapper[17411]: I0223 13:10:39.870063 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.871298 master-0 kubenswrapper[17411]: I0223 13:10:39.871207 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.871925 master-0 kubenswrapper[17411]: I0223 
13:10:39.871553 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.872074 master-0 kubenswrapper[17411]: I0223 13:10:39.872032 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.872172 master-0 kubenswrapper[17411]: I0223 13:10:39.872159 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.872424 master-0 kubenswrapper[17411]: I0223 13:10:39.872406 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.872527 master-0 kubenswrapper[17411]: I0223 13:10:39.872513 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-session\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.872751 master-0 kubenswrapper[17411]: I0223 13:10:39.872731 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-audit-policies\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.872875 master-0 kubenswrapper[17411]: I0223 13:10:39.872859 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.873130 master-0 kubenswrapper[17411]: I0223 13:10:39.873114 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/240a114d-1fb4-4787-a56d-820006dd7888-audit-dir\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.873237 master-0 kubenswrapper[17411]: I0223 13:10:39.873224 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" 
Feb 23 13:10:39.873386 master-0 kubenswrapper[17411]: I0223 13:10:39.873365 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82zkl\" (UniqueName: \"kubernetes.io/projected/240a114d-1fb4-4787-a56d-820006dd7888-kube-api-access-82zkl\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.873489 master-0 kubenswrapper[17411]: I0223 13:10:39.873474 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.874077 master-0 kubenswrapper[17411]: I0223 13:10:39.874059 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/240a114d-1fb4-4787-a56d-820006dd7888-audit-dir\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.874184 master-0 kubenswrapper[17411]: I0223 13:10:39.874127 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-audit-policies\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.874762 master-0 kubenswrapper[17411]: I0223 13:10:39.874728 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.875554 master-0 kubenswrapper[17411]: I0223 13:10:39.875366 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.875554 master-0 kubenswrapper[17411]: I0223 13:10:39.875514 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.875670 master-0 kubenswrapper[17411]: I0223 13:10:39.875580 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-session\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.875843 master-0 kubenswrapper[17411]: I0223 13:10:39.875814 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " 
pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.876273 master-0 kubenswrapper[17411]: I0223 13:10:39.876236 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.876385 master-0 kubenswrapper[17411]: I0223 13:10:39.876230 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.878849 master-0 kubenswrapper[17411]: I0223 13:10:39.878053 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.878849 master-0 kubenswrapper[17411]: I0223 13:10:39.878396 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:39.891581 master-0 kubenswrapper[17411]: I0223 13:10:39.891541 17411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82zkl\" (UniqueName: \"kubernetes.io/projected/240a114d-1fb4-4787-a56d-820006dd7888-kube-api-access-82zkl\") pod \"oauth-openshift-6d4766ffb-ff98d\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:40.033306 master-0 kubenswrapper[17411]: I0223 13:10:40.032846 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:40.490604 master-0 kubenswrapper[17411]: I0223 13:10:40.490505 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4766ffb-ff98d"] Feb 23 13:10:40.497435 master-0 kubenswrapper[17411]: W0223 13:10:40.497393 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod240a114d_1fb4_4787_a56d_820006dd7888.slice/crio-de5d07253d09cb464857bea6c2cd82cbeba1dcd3d21233f2bf6179403ca8acf2 WatchSource:0}: Error finding container de5d07253d09cb464857bea6c2cd82cbeba1dcd3d21233f2bf6179403ca8acf2: Status 404 returned error can't find the container with id de5d07253d09cb464857bea6c2cd82cbeba1dcd3d21233f2bf6179403ca8acf2 Feb 23 13:10:40.652984 master-0 kubenswrapper[17411]: I0223 13:10:40.650224 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" event={"ID":"240a114d-1fb4-4787-a56d-820006dd7888","Type":"ContainerStarted","Data":"de5d07253d09cb464857bea6c2cd82cbeba1dcd3d21233f2bf6179403ca8acf2"} Feb 23 13:10:40.884193 master-0 kubenswrapper[17411]: I0223 13:10:40.884006 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91641690-255e-4c8d-ae63-ad4ad07284b6" path="/var/lib/kubelet/pods/91641690-255e-4c8d-ae63-ad4ad07284b6/volumes" Feb 23 13:10:43.685949 master-0 kubenswrapper[17411]: I0223 13:10:43.685863 
17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" event={"ID":"240a114d-1fb4-4787-a56d-820006dd7888","Type":"ContainerStarted","Data":"482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50"} Feb 23 13:10:43.687112 master-0 kubenswrapper[17411]: I0223 13:10:43.687047 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:43.697679 master-0 kubenswrapper[17411]: I0223 13:10:43.697621 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:10:43.716658 master-0 kubenswrapper[17411]: I0223 13:10:43.716480 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" podStartSLOduration=3.835057665 podStartE2EDuration="5.715832853s" podCreationTimestamp="2026-02-23 13:10:38 +0000 UTC" firstStartedPulling="2026-02-23 13:10:40.500786165 +0000 UTC m=+233.928292762" lastFinishedPulling="2026-02-23 13:10:42.381561353 +0000 UTC m=+235.809067950" observedRunningTime="2026-02-23 13:10:43.712497672 +0000 UTC m=+237.140004359" watchObservedRunningTime="2026-02-23 13:10:43.715832853 +0000 UTC m=+237.143339490" Feb 23 13:10:47.103366 master-0 kubenswrapper[17411]: I0223 13:10:47.103304 17411 scope.go:117] "RemoveContainer" containerID="dfd86a94ccff1eeb13e1ddaabeeeb38c3d4bc54e7d5689b425d76ab48acf7562" Feb 23 13:10:52.835339 master-0 kubenswrapper[17411]: I0223 13:10:52.835228 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 23 13:10:52.836340 master-0 kubenswrapper[17411]: I0223 13:10:52.836312 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:52.839366 master-0 kubenswrapper[17411]: I0223 13:10:52.839035 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 23 13:10:52.839746 master-0 kubenswrapper[17411]: I0223 13:10:52.839630 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-228ws" Feb 23 13:10:52.856468 master-0 kubenswrapper[17411]: I0223 13:10:52.856322 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 23 13:10:52.914328 master-0 kubenswrapper[17411]: I0223 13:10:52.914182 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e0e0f7e-b725-4aae-8180-024b699386d5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:52.914632 master-0 kubenswrapper[17411]: I0223 13:10:52.914476 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-var-lock\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:52.914632 master-0 kubenswrapper[17411]: I0223 13:10:52.914528 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:53.017173 master-0 kubenswrapper[17411]: I0223 13:10:53.017096 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-var-lock\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:53.017379 master-0 kubenswrapper[17411]: I0223 13:10:53.017206 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:53.017379 master-0 kubenswrapper[17411]: I0223 13:10:53.017334 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e0e0f7e-b725-4aae-8180-024b699386d5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:53.017954 master-0 kubenswrapper[17411]: I0223 13:10:53.017873 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-var-lock\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:53.018068 master-0 kubenswrapper[17411]: I0223 13:10:53.018029 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:53.041954 master-0 kubenswrapper[17411]: I0223 13:10:53.041911 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9e0e0f7e-b725-4aae-8180-024b699386d5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:53.173675 master-0 kubenswrapper[17411]: I0223 13:10:53.173504 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 23 13:10:53.644882 master-0 kubenswrapper[17411]: I0223 13:10:53.644819 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 23 13:10:53.770920 master-0 kubenswrapper[17411]: I0223 13:10:53.770844 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"9e0e0f7e-b725-4aae-8180-024b699386d5","Type":"ContainerStarted","Data":"eb368b6194084cec835b21f0719a65815da0a901d09f69f4ed986212e9cb21cd"} Feb 23 13:10:54.780771 master-0 kubenswrapper[17411]: I0223 13:10:54.780696 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"9e0e0f7e-b725-4aae-8180-024b699386d5","Type":"ContainerStarted","Data":"9c8b9fbf1cbf9e1003e0b8ccc584b33fb92b0bc5724aa5fc538574be059a308e"} Feb 23 13:10:54.810922 master-0 kubenswrapper[17411]: I0223 13:10:54.810769 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.810736645 podStartE2EDuration="2.810736645s" podCreationTimestamp="2026-02-23 13:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:54.805673106 +0000 UTC m=+248.233179783" watchObservedRunningTime="2026-02-23 13:10:54.810736645 +0000 UTC m=+248.238243282" Feb 23 13:11:02.213219 master-0 kubenswrapper[17411]: I0223 13:11:02.213147 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 23 
13:11:02.215223 master-0 kubenswrapper[17411]: I0223 13:11:02.215199 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.224041 master-0 kubenswrapper[17411]: I0223 13:11:02.223976 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-q2chk" Feb 23 13:11:02.224288 master-0 kubenswrapper[17411]: I0223 13:11:02.224228 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:11:02.225985 master-0 kubenswrapper[17411]: I0223 13:11:02.225397 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.226220 master-0 kubenswrapper[17411]: I0223 13:11:02.226147 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kube-api-access\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.226538 master-0 kubenswrapper[17411]: I0223 13:11:02.226491 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-var-lock\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.275155 master-0 kubenswrapper[17411]: I0223 13:11:02.233403 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 23 13:11:02.327909 master-0 kubenswrapper[17411]: I0223 13:11:02.327812 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-var-lock\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.328268 master-0 kubenswrapper[17411]: I0223 13:11:02.327988 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.328268 master-0 kubenswrapper[17411]: I0223 13:11:02.328012 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kube-api-access\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.328425 master-0 kubenswrapper[17411]: I0223 13:11:02.328388 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-var-lock\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.328632 master-0 kubenswrapper[17411]: I0223 13:11:02.328546 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " 
pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.345581 master-0 kubenswrapper[17411]: I0223 13:11:02.345521 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kube-api-access\") pod \"installer-3-master-0\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:02.593809 master-0 kubenswrapper[17411]: I0223 13:11:02.592613 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:03.024936 master-0 kubenswrapper[17411]: I0223 13:11:03.024856 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 23 13:11:03.031155 master-0 kubenswrapper[17411]: W0223 13:11:03.031060 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod72fb1770_7d0c_4c92_9f0b_3139f27510ca.slice/crio-6e09d64343b7a79a552f30e89f9078abd341790160d84669d82dc2e30bb6a2cc WatchSource:0}: Error finding container 6e09d64343b7a79a552f30e89f9078abd341790160d84669d82dc2e30bb6a2cc: Status 404 returned error can't find the container with id 6e09d64343b7a79a552f30e89f9078abd341790160d84669d82dc2e30bb6a2cc Feb 23 13:11:03.855848 master-0 kubenswrapper[17411]: I0223 13:11:03.855776 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"72fb1770-7d0c-4c92-9f0b-3139f27510ca","Type":"ContainerStarted","Data":"af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455"} Feb 23 13:11:03.855848 master-0 kubenswrapper[17411]: I0223 13:11:03.855843 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" 
event={"ID":"72fb1770-7d0c-4c92-9f0b-3139f27510ca","Type":"ContainerStarted","Data":"6e09d64343b7a79a552f30e89f9078abd341790160d84669d82dc2e30bb6a2cc"} Feb 23 13:11:03.880228 master-0 kubenswrapper[17411]: I0223 13:11:03.880123 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=1.880098309 podStartE2EDuration="1.880098309s" podCreationTimestamp="2026-02-23 13:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:03.873806637 +0000 UTC m=+257.301313244" watchObservedRunningTime="2026-02-23 13:11:03.880098309 +0000 UTC m=+257.307604906" Feb 23 13:11:10.941125 master-0 kubenswrapper[17411]: E0223 13:11:10.941032 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[trusted-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" Feb 23 13:11:11.919286 master-0 kubenswrapper[17411]: I0223 13:11:11.919197 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:11:12.302812 master-0 kubenswrapper[17411]: I0223 13:11:12.302696 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-w868k"] Feb 23 13:11:12.304369 master-0 kubenswrapper[17411]: I0223 13:11:12.304330 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.307181 master-0 kubenswrapper[17411]: I0223 13:11:12.307128 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-45sq4" Feb 23 13:11:12.307355 master-0 kubenswrapper[17411]: I0223 13:11:12.307215 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 23 13:11:12.375496 master-0 kubenswrapper[17411]: I0223 13:11:12.375445 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vg9x\" (UniqueName: \"kubernetes.io/projected/e3516f78-36c2-4b5e-a265-96eb305235f9-kube-api-access-2vg9x\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.375759 master-0 kubenswrapper[17411]: I0223 13:11:12.375511 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e3516f78-36c2-4b5e-a265-96eb305235f9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.375759 master-0 kubenswrapper[17411]: I0223 13:11:12.375577 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e3516f78-36c2-4b5e-a265-96eb305235f9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.375962 master-0 kubenswrapper[17411]: I0223 13:11:12.375893 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/e3516f78-36c2-4b5e-a265-96eb305235f9-ready\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.476904 master-0 kubenswrapper[17411]: I0223 13:11:12.476826 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e3516f78-36c2-4b5e-a265-96eb305235f9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.477212 master-0 kubenswrapper[17411]: I0223 13:11:12.476926 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e3516f78-36c2-4b5e-a265-96eb305235f9-ready\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.477212 master-0 kubenswrapper[17411]: I0223 13:11:12.476965 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vg9x\" (UniqueName: \"kubernetes.io/projected/e3516f78-36c2-4b5e-a265-96eb305235f9-kube-api-access-2vg9x\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.477212 master-0 kubenswrapper[17411]: I0223 13:11:12.476993 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e3516f78-36c2-4b5e-a265-96eb305235f9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.477212 master-0 kubenswrapper[17411]: I0223 13:11:12.476960 17411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e3516f78-36c2-4b5e-a265-96eb305235f9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.477630 master-0 kubenswrapper[17411]: I0223 13:11:12.477327 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e3516f78-36c2-4b5e-a265-96eb305235f9-ready\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.478059 master-0 kubenswrapper[17411]: I0223 13:11:12.477981 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e3516f78-36c2-4b5e-a265-96eb305235f9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.493799 master-0 kubenswrapper[17411]: I0223 13:11:12.493757 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vg9x\" (UniqueName: \"kubernetes.io/projected/e3516f78-36c2-4b5e-a265-96eb305235f9-kube-api-access-2vg9x\") pod \"cni-sysctl-allowlist-ds-w868k\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.623652 master-0 kubenswrapper[17411]: I0223 13:11:12.623511 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:12.663560 master-0 kubenswrapper[17411]: W0223 13:11:12.663490 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3516f78_36c2_4b5e_a265_96eb305235f9.slice/crio-1706af823399c2b9d2e90794a42ff0a2dda345f3465a9035e61145d8a55b0675 WatchSource:0}: Error finding container 1706af823399c2b9d2e90794a42ff0a2dda345f3465a9035e61145d8a55b0675: Status 404 returned error can't find the container with id 1706af823399c2b9d2e90794a42ff0a2dda345f3465a9035e61145d8a55b0675 Feb 23 13:11:12.926885 master-0 kubenswrapper[17411]: I0223 13:11:12.926835 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" event={"ID":"e3516f78-36c2-4b5e-a265-96eb305235f9","Type":"ContainerStarted","Data":"75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944"} Feb 23 13:11:12.927029 master-0 kubenswrapper[17411]: I0223 13:11:12.926891 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" event={"ID":"e3516f78-36c2-4b5e-a265-96eb305235f9","Type":"ContainerStarted","Data":"1706af823399c2b9d2e90794a42ff0a2dda345f3465a9035e61145d8a55b0675"} Feb 23 13:11:13.936430 master-0 kubenswrapper[17411]: I0223 13:11:13.936352 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:13.958753 master-0 kubenswrapper[17411]: I0223 13:11:13.958698 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:13.989725 master-0 kubenswrapper[17411]: I0223 13:11:13.989611 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" podStartSLOduration=1.989585173 podStartE2EDuration="1.989585173s" 
podCreationTimestamp="2026-02-23 13:11:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:13.986401196 +0000 UTC m=+267.413907813" watchObservedRunningTime="2026-02-23 13:11:13.989585173 +0000 UTC m=+267.417091780" Feb 23 13:11:14.294585 master-0 kubenswrapper[17411]: I0223 13:11:14.294525 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-w868k"] Feb 23 13:11:14.313073 master-0 kubenswrapper[17411]: I0223 13:11:14.312981 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:11:14.313332 master-0 kubenswrapper[17411]: E0223 13:11:14.313279 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca podName:679fabb5-a261-402e-b5be-8fe7f0da0ec8 nodeName:}" failed. No retries permitted until 2026-02-23 13:13:16.313225494 +0000 UTC m=+389.740732201 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca") pod "console-operator-5df5ffc47c-zwmzz" (UID: "679fabb5-a261-402e-b5be-8fe7f0da0ec8") : configmap references non-existent config key: ca-bundle.crt Feb 23 13:11:15.948679 master-0 kubenswrapper[17411]: I0223 13:11:15.948595 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" gracePeriod=30 Feb 23 13:11:16.844381 master-0 kubenswrapper[17411]: I0223 13:11:16.844302 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 23 13:11:16.844874 master-0 kubenswrapper[17411]: I0223 13:11:16.844615 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-master-0" podUID="72fb1770-7d0c-4c92-9f0b-3139f27510ca" containerName="installer" containerID="cri-o://af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455" gracePeriod=30 Feb 23 13:11:20.009028 master-0 kubenswrapper[17411]: I0223 13:11:20.008962 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:11:20.009947 master-0 kubenswrapper[17411]: I0223 13:11:20.009916 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.063264 master-0 kubenswrapper[17411]: I0223 13:11:20.063166 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:11:20.113061 master-0 kubenswrapper[17411]: I0223 13:11:20.112963 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-var-lock\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.113315 master-0 kubenswrapper[17411]: I0223 13:11:20.113178 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/382f96d2-f66c-4adc-9b6d-4ed63124da89-kube-api-access\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.113315 master-0 kubenswrapper[17411]: I0223 13:11:20.113276 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.215134 master-0 kubenswrapper[17411]: I0223 13:11:20.215025 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-var-lock\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.215421 master-0 kubenswrapper[17411]: I0223 13:11:20.215226 17411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-var-lock\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.215474 master-0 kubenswrapper[17411]: I0223 13:11:20.215442 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/382f96d2-f66c-4adc-9b6d-4ed63124da89-kube-api-access\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.215704 master-0 kubenswrapper[17411]: I0223 13:11:20.215661 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.215871 master-0 kubenswrapper[17411]: I0223 13:11:20.215815 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.249343 master-0 kubenswrapper[17411]: I0223 13:11:20.249290 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/382f96d2-f66c-4adc-9b6d-4ed63124da89-kube-api-access\") pod \"installer-4-master-0\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.329064 master-0 kubenswrapper[17411]: I0223 13:11:20.328901 17411 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:11:20.803285 master-0 kubenswrapper[17411]: I0223 13:11:20.801492 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 23 13:11:20.812098 master-0 kubenswrapper[17411]: W0223 13:11:20.812004 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod382f96d2_f66c_4adc_9b6d_4ed63124da89.slice/crio-948cc2a2055945d11e25dd026a1e35774b134b2d31df361e246a3e9606f15cae WatchSource:0}: Error finding container 948cc2a2055945d11e25dd026a1e35774b134b2d31df361e246a3e9606f15cae: Status 404 returned error can't find the container with id 948cc2a2055945d11e25dd026a1e35774b134b2d31df361e246a3e9606f15cae Feb 23 13:11:20.987389 master-0 kubenswrapper[17411]: I0223 13:11:20.987315 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"382f96d2-f66c-4adc-9b6d-4ed63124da89","Type":"ContainerStarted","Data":"948cc2a2055945d11e25dd026a1e35774b134b2d31df361e246a3e9606f15cae"} Feb 23 13:11:21.998896 master-0 kubenswrapper[17411]: I0223 13:11:21.998815 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"382f96d2-f66c-4adc-9b6d-4ed63124da89","Type":"ContainerStarted","Data":"75e186849ab472b06510b38037d45625e486194e5caf39cee1406a4fb4c97a4d"} Feb 23 13:11:22.044705 master-0 kubenswrapper[17411]: I0223 13:11:22.044583 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=3.044549604 podStartE2EDuration="3.044549604s" podCreationTimestamp="2026-02-23 13:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:22.039055424 +0000 UTC m=+275.466562111" 
watchObservedRunningTime="2026-02-23 13:11:22.044549604 +0000 UTC m=+275.472056241" Feb 23 13:11:22.262410 master-0 kubenswrapper[17411]: I0223 13:11:22.262295 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-774bdb5777-xk9gx"] Feb 23 13:11:22.263810 master-0 kubenswrapper[17411]: I0223 13:11:22.263782 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" Feb 23 13:11:22.266150 master-0 kubenswrapper[17411]: I0223 13:11:22.266105 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-582hf" Feb 23 13:11:22.272598 master-0 kubenswrapper[17411]: I0223 13:11:22.272484 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-774bdb5777-xk9gx"] Feb 23 13:11:22.457517 master-0 kubenswrapper[17411]: I0223 13:11:22.457433 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/811a6e4e-997c-4173-a3b4-8af94524ecea-webhook-certs\") pod \"multus-admission-controller-774bdb5777-xk9gx\" (UID: \"811a6e4e-997c-4173-a3b4-8af94524ecea\") " pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" Feb 23 13:11:22.457858 master-0 kubenswrapper[17411]: I0223 13:11:22.457508 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp5qk\" (UniqueName: \"kubernetes.io/projected/811a6e4e-997c-4173-a3b4-8af94524ecea-kube-api-access-rp5qk\") pod \"multus-admission-controller-774bdb5777-xk9gx\" (UID: \"811a6e4e-997c-4173-a3b4-8af94524ecea\") " pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" Feb 23 13:11:22.559563 master-0 kubenswrapper[17411]: I0223 13:11:22.559357 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/811a6e4e-997c-4173-a3b4-8af94524ecea-webhook-certs\") pod \"multus-admission-controller-774bdb5777-xk9gx\" (UID: \"811a6e4e-997c-4173-a3b4-8af94524ecea\") " pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" Feb 23 13:11:22.559563 master-0 kubenswrapper[17411]: I0223 13:11:22.559416 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp5qk\" (UniqueName: \"kubernetes.io/projected/811a6e4e-997c-4173-a3b4-8af94524ecea-kube-api-access-rp5qk\") pod \"multus-admission-controller-774bdb5777-xk9gx\" (UID: \"811a6e4e-997c-4173-a3b4-8af94524ecea\") " pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" Feb 23 13:11:22.564317 master-0 kubenswrapper[17411]: I0223 13:11:22.564222 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/811a6e4e-997c-4173-a3b4-8af94524ecea-webhook-certs\") pod \"multus-admission-controller-774bdb5777-xk9gx\" (UID: \"811a6e4e-997c-4173-a3b4-8af94524ecea\") " pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" Feb 23 13:11:22.576606 master-0 kubenswrapper[17411]: I0223 13:11:22.576517 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp5qk\" (UniqueName: \"kubernetes.io/projected/811a6e4e-997c-4173-a3b4-8af94524ecea-kube-api-access-rp5qk\") pod \"multus-admission-controller-774bdb5777-xk9gx\" (UID: \"811a6e4e-997c-4173-a3b4-8af94524ecea\") " pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" Feb 23 13:11:22.595607 master-0 kubenswrapper[17411]: I0223 13:11:22.595557 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" Feb 23 13:11:22.627298 master-0 kubenswrapper[17411]: E0223 13:11:22.626452 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:22.628313 master-0 kubenswrapper[17411]: E0223 13:11:22.628215 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:22.634790 master-0 kubenswrapper[17411]: E0223 13:11:22.634728 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:22.634888 master-0 kubenswrapper[17411]: E0223 13:11:22.634813 17411 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" containerName="kube-multus-additional-cni-plugins" Feb 23 13:11:23.067583 master-0 kubenswrapper[17411]: I0223 13:11:23.067504 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-774bdb5777-xk9gx"] Feb 23 13:11:23.070236 master-0 kubenswrapper[17411]: W0223 13:11:23.070185 17411 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod811a6e4e_997c_4173_a3b4_8af94524ecea.slice/crio-3effa30dd4e55b11c22c7b1704a2c05696839e4f68f2e7e096d8098683ca9daa WatchSource:0}: Error finding container 3effa30dd4e55b11c22c7b1704a2c05696839e4f68f2e7e096d8098683ca9daa: Status 404 returned error can't find the container with id 3effa30dd4e55b11c22c7b1704a2c05696839e4f68f2e7e096d8098683ca9daa Feb 23 13:11:24.021073 master-0 kubenswrapper[17411]: I0223 13:11:24.020986 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" event={"ID":"811a6e4e-997c-4173-a3b4-8af94524ecea","Type":"ContainerStarted","Data":"ac658626e61408b87ba78255f43d934dc2b92e71dc8de3534dc17379f1fa377a"} Feb 23 13:11:24.021592 master-0 kubenswrapper[17411]: I0223 13:11:24.021570 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" event={"ID":"811a6e4e-997c-4173-a3b4-8af94524ecea","Type":"ContainerStarted","Data":"02d9980af8f59b34b5c8e950058c23053a548933139c9b7d32ab751b0510bad4"} Feb 23 13:11:24.021683 master-0 kubenswrapper[17411]: I0223 13:11:24.021668 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" event={"ID":"811a6e4e-997c-4173-a3b4-8af94524ecea","Type":"ContainerStarted","Data":"3effa30dd4e55b11c22c7b1704a2c05696839e4f68f2e7e096d8098683ca9daa"} Feb 23 13:11:24.052919 master-0 kubenswrapper[17411]: I0223 13:11:24.052797 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-774bdb5777-xk9gx" podStartSLOduration=2.052765906 podStartE2EDuration="2.052765906s" podCreationTimestamp="2026-02-23 13:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 13:11:24.047131842 +0000 UTC m=+277.474638459" watchObservedRunningTime="2026-02-23 13:11:24.052765906 +0000 UTC m=+277.480272543" Feb 23 13:11:24.120728 master-0 kubenswrapper[17411]: I0223 13:11:24.120665 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"] Feb 23 13:11:24.121313 master-0 kubenswrapper[17411]: I0223 13:11:24.120956 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" podUID="44b07d33-6e84-434e-9a14-431846620968" containerName="multus-admission-controller" containerID="cri-o://e430df40036149c49e2ec2bcef759184c22db256e9c6a2afbd7778eeb4659b79" gracePeriod=30 Feb 23 13:11:24.121456 master-0 kubenswrapper[17411]: I0223 13:11:24.121432 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" podUID="44b07d33-6e84-434e-9a14-431846620968" containerName="kube-rbac-proxy" containerID="cri-o://66d7b9b29d7eeeb9236a56c762cde3c1a65c77718df7cdff3b00efe2346c3dc9" gracePeriod=30 Feb 23 13:11:25.032367 master-0 kubenswrapper[17411]: I0223 13:11:25.032249 17411 generic.go:334] "Generic (PLEG): container finished" podID="44b07d33-6e84-434e-9a14-431846620968" containerID="66d7b9b29d7eeeb9236a56c762cde3c1a65c77718df7cdff3b00efe2346c3dc9" exitCode=0 Feb 23 13:11:25.032623 master-0 kubenswrapper[17411]: I0223 13:11:25.032431 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" event={"ID":"44b07d33-6e84-434e-9a14-431846620968","Type":"ContainerDied","Data":"66d7b9b29d7eeeb9236a56c762cde3c1a65c77718df7cdff3b00efe2346c3dc9"} Feb 23 13:11:25.516175 master-0 kubenswrapper[17411]: I0223 13:11:25.516097 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:11:25.516812 master-0 
kubenswrapper[17411]: I0223 13:11:25.516482 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" containerID="cri-o://d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791" gracePeriod=30 Feb 23 13:11:25.516812 master-0 kubenswrapper[17411]: I0223 13:11:25.516571 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" containerID="cri-o://6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c" gracePeriod=30 Feb 23 13:11:25.516812 master-0 kubenswrapper[17411]: I0223 13:11:25.516714 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" containerID="cri-o://74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63" gracePeriod=30 Feb 23 13:11:25.516958 master-0 kubenswrapper[17411]: I0223 13:11:25.516806 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" containerID="cri-o://2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14" gracePeriod=30 Feb 23 13:11:25.517548 master-0 kubenswrapper[17411]: I0223 13:11:25.516661 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" containerID="cri-o://d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422" gracePeriod=30 Feb 23 13:11:25.521062 master-0 kubenswrapper[17411]: I0223 13:11:25.521003 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:11:25.521926 master-0 kubenswrapper[17411]: E0223 13:11:25.521874 17411 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars" Feb 23 13:11:25.521926 master-0 kubenswrapper[17411]: I0223 13:11:25.521920 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars" Feb 23 13:11:25.522046 master-0 kubenswrapper[17411]: E0223 13:11:25.521945 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 23 13:11:25.522046 master-0 kubenswrapper[17411]: I0223 13:11:25.521957 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: E0223 13:11:25.527689 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: I0223 13:11:25.527769 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: E0223 13:11:25.527791 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: I0223 13:11:25.527804 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: E0223 13:11:25.527822 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: I0223 13:11:25.527834 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 23 13:11:25.528401 
master-0 kubenswrapper[17411]: E0223 13:11:25.527854 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: I0223 13:11:25.527865 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: E0223 13:11:25.527911 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: I0223 13:11:25.527923 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: E0223 13:11:25.527953 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 23 13:11:25.528401 master-0 kubenswrapper[17411]: I0223 13:11:25.527965 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 23 13:11:25.528906 master-0 kubenswrapper[17411]: I0223 13:11:25.528524 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 23 13:11:25.528906 master-0 kubenswrapper[17411]: I0223 13:11:25.528562 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars" Feb 23 13:11:25.528906 master-0 kubenswrapper[17411]: I0223 13:11:25.528577 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 23 13:11:25.528906 master-0 kubenswrapper[17411]: I0223 13:11:25.528614 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" 
containerName="etcd" Feb 23 13:11:25.528906 master-0 kubenswrapper[17411]: I0223 13:11:25.528633 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 23 13:11:25.528906 master-0 kubenswrapper[17411]: I0223 13:11:25.528657 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 23 13:11:25.537571 master-0 kubenswrapper[17411]: I0223 13:11:25.529157 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 23 13:11:25.537571 master-0 kubenswrapper[17411]: I0223 13:11:25.529244 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 23 13:11:25.615637 master-0 kubenswrapper[17411]: I0223 13:11:25.615571 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.615902 master-0 kubenswrapper[17411]: I0223 13:11:25.615662 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.615902 master-0 kubenswrapper[17411]: I0223 13:11:25.615699 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 
13:11:25.615902 master-0 kubenswrapper[17411]: I0223 13:11:25.615797 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.615902 master-0 kubenswrapper[17411]: I0223 13:11:25.615847 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.616192 master-0 kubenswrapper[17411]: I0223 13:11:25.616114 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717108 master-0 kubenswrapper[17411]: I0223 13:11:25.717052 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717210 master-0 kubenswrapper[17411]: I0223 13:11:25.717121 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717210 master-0 kubenswrapper[17411]: I0223 13:11:25.717174 17411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717460 master-0 kubenswrapper[17411]: I0223 13:11:25.717317 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717460 master-0 kubenswrapper[17411]: I0223 13:11:25.717325 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717460 master-0 kubenswrapper[17411]: I0223 13:11:25.717384 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717460 master-0 kubenswrapper[17411]: I0223 13:11:25.717455 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717621 master-0 kubenswrapper[17411]: I0223 13:11:25.717484 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " 
pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717621 master-0 kubenswrapper[17411]: I0223 13:11:25.717506 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717698 master-0 kubenswrapper[17411]: I0223 13:11:25.717626 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717742 master-0 kubenswrapper[17411]: I0223 13:11:25.717701 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:25.717844 master-0 kubenswrapper[17411]: I0223 13:11:25.717803 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 23 13:11:26.044841 master-0 kubenswrapper[17411]: I0223 13:11:26.044762 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 23 13:11:26.046375 master-0 kubenswrapper[17411]: I0223 13:11:26.046331 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 23 13:11:26.048779 master-0 kubenswrapper[17411]: I0223 13:11:26.048723 17411 
generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c" exitCode=2 Feb 23 13:11:26.048779 master-0 kubenswrapper[17411]: I0223 13:11:26.048766 17411 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63" exitCode=0 Feb 23 13:11:26.048779 master-0 kubenswrapper[17411]: I0223 13:11:26.048775 17411 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422" exitCode=2 Feb 23 13:11:32.626869 master-0 kubenswrapper[17411]: E0223 13:11:32.626707 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:32.628875 master-0 kubenswrapper[17411]: E0223 13:11:32.628816 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:32.630526 master-0 kubenswrapper[17411]: E0223 13:11:32.630478 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:32.630526 master-0 kubenswrapper[17411]: E0223 13:11:32.630517 17411 
prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" containerName="kube-multus-additional-cni-plugins" Feb 23 13:11:34.104448 master-0 kubenswrapper[17411]: E0223 13:11:34.104336 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[alertmanager-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" Feb 23 13:11:34.127005 master-0 kubenswrapper[17411]: I0223 13:11:34.126940 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:11:34.537859 master-0 kubenswrapper[17411]: I0223 13:11:34.537780 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_72fb1770-7d0c-4c92-9f0b-3139f27510ca/installer/0.log" Feb 23 13:11:34.537859 master-0 kubenswrapper[17411]: I0223 13:11:34.537862 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:34.707984 master-0 kubenswrapper[17411]: I0223 13:11:34.707883 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-var-lock\") pod \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " Feb 23 13:11:34.708693 master-0 kubenswrapper[17411]: I0223 13:11:34.708041 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-var-lock" (OuterVolumeSpecName: "var-lock") pod "72fb1770-7d0c-4c92-9f0b-3139f27510ca" (UID: "72fb1770-7d0c-4c92-9f0b-3139f27510ca"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:34.708693 master-0 kubenswrapper[17411]: I0223 13:11:34.708648 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kube-api-access\") pod \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " Feb 23 13:11:34.709413 master-0 kubenswrapper[17411]: I0223 13:11:34.709372 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kubelet-dir\") pod \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\" (UID: \"72fb1770-7d0c-4c92-9f0b-3139f27510ca\") " Feb 23 13:11:34.709464 master-0 kubenswrapper[17411]: I0223 13:11:34.709429 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72fb1770-7d0c-4c92-9f0b-3139f27510ca" (UID: "72fb1770-7d0c-4c92-9f0b-3139f27510ca"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:34.710064 master-0 kubenswrapper[17411]: I0223 13:11:34.710020 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:34.710064 master-0 kubenswrapper[17411]: I0223 13:11:34.710055 17411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:34.713967 master-0 kubenswrapper[17411]: I0223 13:11:34.713882 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72fb1770-7d0c-4c92-9f0b-3139f27510ca" (UID: "72fb1770-7d0c-4c92-9f0b-3139f27510ca"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:11:34.812412 master-0 kubenswrapper[17411]: I0223 13:11:34.812323 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72fb1770-7d0c-4c92-9f0b-3139f27510ca-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:35.139980 master-0 kubenswrapper[17411]: I0223 13:11:35.139769 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_72fb1770-7d0c-4c92-9f0b-3139f27510ca/installer/0.log" Feb 23 13:11:35.139980 master-0 kubenswrapper[17411]: I0223 13:11:35.139852 17411 generic.go:334] "Generic (PLEG): container finished" podID="72fb1770-7d0c-4c92-9f0b-3139f27510ca" containerID="af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455" exitCode=1 Feb 23 13:11:35.139980 master-0 kubenswrapper[17411]: I0223 13:11:35.139895 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"72fb1770-7d0c-4c92-9f0b-3139f27510ca","Type":"ContainerDied","Data":"af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455"} Feb 23 13:11:35.139980 master-0 kubenswrapper[17411]: I0223 13:11:35.139939 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"72fb1770-7d0c-4c92-9f0b-3139f27510ca","Type":"ContainerDied","Data":"6e09d64343b7a79a552f30e89f9078abd341790160d84669d82dc2e30bb6a2cc"} Feb 23 13:11:35.139980 master-0 kubenswrapper[17411]: I0223 13:11:35.139961 17411 scope.go:117] "RemoveContainer" containerID="af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455" Feb 23 13:11:35.141120 master-0 kubenswrapper[17411]: I0223 13:11:35.140020 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 23 13:11:35.165888 master-0 kubenswrapper[17411]: I0223 13:11:35.165814 17411 scope.go:117] "RemoveContainer" containerID="af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455" Feb 23 13:11:35.166519 master-0 kubenswrapper[17411]: E0223 13:11:35.166474 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455\": container with ID starting with af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455 not found: ID does not exist" containerID="af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455" Feb 23 13:11:35.166579 master-0 kubenswrapper[17411]: I0223 13:11:35.166521 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455"} err="failed to get container status \"af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455\": rpc error: code = NotFound desc = could not find container \"af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455\": container with ID starting with af493d281139459c3a7ec7202daa6049bd23d64168929976a4df112f9cc9b455 not found: ID does not exist" Feb 23 13:11:36.946474 master-0 kubenswrapper[17411]: E0223 13:11:36.946342 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:11:37.559044 master-0 kubenswrapper[17411]: I0223 13:11:37.558933 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") 
pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:11:37.561991 master-0 kubenswrapper[17411]: I0223 13:11:37.561927 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:11:37.731415 master-0 kubenswrapper[17411]: I0223 13:11:37.731325 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wnzv6" Feb 23 13:11:37.739024 master-0 kubenswrapper[17411]: I0223 13:11:37.738980 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:11:39.148597 master-0 kubenswrapper[17411]: E0223 13:11:39.148509 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" Feb 23 13:11:39.175754 master-0 kubenswrapper[17411]: I0223 13:11:39.175693 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:11:40.185533 master-0 kubenswrapper[17411]: I0223 13:11:40.185408 17411 generic.go:334] "Generic (PLEG): container finished" podID="9e0e0f7e-b725-4aae-8180-024b699386d5" containerID="9c8b9fbf1cbf9e1003e0b8ccc584b33fb92b0bc5724aa5fc538574be059a308e" exitCode=0 Feb 23 13:11:40.185533 master-0 kubenswrapper[17411]: I0223 13:11:40.185482 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"9e0e0f7e-b725-4aae-8180-024b699386d5","Type":"ContainerDied","Data":"9c8b9fbf1cbf9e1003e0b8ccc584b33fb92b0bc5724aa5fc538574be059a308e"} Feb 23 13:11:41.197711 master-0 kubenswrapper[17411]: I0223 13:11:41.197649 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:11:41.198904 master-0 kubenswrapper[17411]: I0223 13:11:41.197719 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="a6bd5c98100900ff484d9ecc07c3575ef2dfde242a0ba0ee9c6ef45ff1a27bdb" exitCode=1 Feb 23 13:11:41.198904 master-0 kubenswrapper[17411]: I0223 13:11:41.197888 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerDied","Data":"a6bd5c98100900ff484d9ecc07c3575ef2dfde242a0ba0ee9c6ef45ff1a27bdb"} Feb 23 13:11:41.198904 master-0 kubenswrapper[17411]: I0223 13:11:41.198815 17411 scope.go:117] "RemoveContainer" containerID="a6bd5c98100900ff484d9ecc07c3575ef2dfde242a0ba0ee9c6ef45ff1a27bdb" Feb 23 13:11:41.492737 master-0 kubenswrapper[17411]: I0223 13:11:41.492659 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 23 13:11:41.540863 master-0 kubenswrapper[17411]: I0223 13:11:41.540784 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-kubelet-dir\") pod \"9e0e0f7e-b725-4aae-8180-024b699386d5\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " Feb 23 13:11:41.541294 master-0 kubenswrapper[17411]: I0223 13:11:41.540967 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e0e0f7e-b725-4aae-8180-024b699386d5-kube-api-access\") pod \"9e0e0f7e-b725-4aae-8180-024b699386d5\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " Feb 23 13:11:41.541294 master-0 kubenswrapper[17411]: I0223 13:11:41.541026 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-var-lock\") pod \"9e0e0f7e-b725-4aae-8180-024b699386d5\" (UID: \"9e0e0f7e-b725-4aae-8180-024b699386d5\") " Feb 23 13:11:41.541294 master-0 kubenswrapper[17411]: I0223 13:11:41.541186 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-var-lock" (OuterVolumeSpecName: "var-lock") pod "9e0e0f7e-b725-4aae-8180-024b699386d5" (UID: "9e0e0f7e-b725-4aae-8180-024b699386d5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:41.541438 master-0 kubenswrapper[17411]: I0223 13:11:41.541420 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:41.542339 master-0 kubenswrapper[17411]: I0223 13:11:41.540935 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9e0e0f7e-b725-4aae-8180-024b699386d5" (UID: "9e0e0f7e-b725-4aae-8180-024b699386d5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:41.545769 master-0 kubenswrapper[17411]: I0223 13:11:41.545684 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0e0f7e-b725-4aae-8180-024b699386d5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9e0e0f7e-b725-4aae-8180-024b699386d5" (UID: "9e0e0f7e-b725-4aae-8180-024b699386d5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:11:41.642531 master-0 kubenswrapper[17411]: I0223 13:11:41.642460 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e0e0f7e-b725-4aae-8180-024b699386d5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:41.642531 master-0 kubenswrapper[17411]: I0223 13:11:41.642507 17411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e0e0f7e-b725-4aae-8180-024b699386d5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:42.208142 master-0 kubenswrapper[17411]: I0223 13:11:42.208064 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:11:42.208817 master-0 kubenswrapper[17411]: I0223 13:11:42.208317 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"6038b47b20500295b07b50ea89a301874d951b7f4b3a978dab3e4e44820c0ac7"} Feb 23 13:11:42.210061 master-0 kubenswrapper[17411]: I0223 13:11:42.210024 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"9e0e0f7e-b725-4aae-8180-024b699386d5","Type":"ContainerDied","Data":"eb368b6194084cec835b21f0719a65815da0a901d09f69f4ed986212e9cb21cd"} Feb 23 13:11:42.210061 master-0 kubenswrapper[17411]: I0223 13:11:42.210055 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb368b6194084cec835b21f0719a65815da0a901d09f69f4ed986212e9cb21cd" Feb 23 13:11:42.210197 master-0 kubenswrapper[17411]: I0223 13:11:42.210120 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 23 13:11:42.627060 master-0 kubenswrapper[17411]: E0223 13:11:42.626918 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:42.629173 master-0 kubenswrapper[17411]: E0223 13:11:42.629094 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:42.631043 master-0 kubenswrapper[17411]: E0223 13:11:42.631001 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:11:42.631113 master-0 kubenswrapper[17411]: E0223 13:11:42.631044 17411 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" containerName="kube-multus-additional-cni-plugins" Feb 23 13:11:42.863855 master-0 kubenswrapper[17411]: I0223 13:11:42.863699 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod 
\"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:11:42.865880 master-0 kubenswrapper[17411]: I0223 13:11:42.865822 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:11:43.079796 master-0 kubenswrapper[17411]: I0223 13:11:43.079727 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-54m2k" Feb 23 13:11:43.087707 master-0 kubenswrapper[17411]: I0223 13:11:43.087618 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:11:43.218785 master-0 kubenswrapper[17411]: I0223 13:11:43.218742 17411 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7" exitCode=1 Feb 23 13:11:43.219222 master-0 kubenswrapper[17411]: I0223 13:11:43.218790 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7"} Feb 23 13:11:43.219222 master-0 kubenswrapper[17411]: I0223 13:11:43.218834 17411 scope.go:117] "RemoveContainer" containerID="fd8a73b94af97a6ee5fd332de6ff901ee87339c2669fee29463cd1d6a2935792" Feb 23 13:11:43.219985 master-0 kubenswrapper[17411]: I0223 13:11:43.219912 17411 scope.go:117] "RemoveContainer" containerID="a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7" Feb 23 13:11:43.220378 master-0 kubenswrapper[17411]: E0223 13:11:43.220318 17411 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(56c3cb71c9851003c8de7e7c5db4b87e)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e" Feb 23 13:11:45.937751 master-0 kubenswrapper[17411]: I0223 13:11:45.937497 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:11:45.937751 master-0 kubenswrapper[17411]: I0223 13:11:45.937752 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:11:45.944568 master-0 kubenswrapper[17411]: I0223 13:11:45.944521 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:11:46.072406 master-0 kubenswrapper[17411]: I0223 13:11:46.072352 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-w868k_e3516f78-36c2-4b5e-a265-96eb305235f9/kube-multus-additional-cni-plugins/0.log" Feb 23 13:11:46.072576 master-0 kubenswrapper[17411]: I0223 13:11:46.072442 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:46.130064 master-0 kubenswrapper[17411]: I0223 13:11:46.129981 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vg9x\" (UniqueName: \"kubernetes.io/projected/e3516f78-36c2-4b5e-a265-96eb305235f9-kube-api-access-2vg9x\") pod \"e3516f78-36c2-4b5e-a265-96eb305235f9\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " Feb 23 13:11:46.130064 master-0 kubenswrapper[17411]: I0223 13:11:46.130032 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e3516f78-36c2-4b5e-a265-96eb305235f9-cni-sysctl-allowlist\") pod \"e3516f78-36c2-4b5e-a265-96eb305235f9\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " Feb 23 13:11:46.130538 master-0 kubenswrapper[17411]: I0223 13:11:46.130159 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e3516f78-36c2-4b5e-a265-96eb305235f9-ready\") pod \"e3516f78-36c2-4b5e-a265-96eb305235f9\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " Feb 23 13:11:46.130538 master-0 kubenswrapper[17411]: I0223 13:11:46.130481 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e3516f78-36c2-4b5e-a265-96eb305235f9-tuning-conf-dir\") pod \"e3516f78-36c2-4b5e-a265-96eb305235f9\" (UID: \"e3516f78-36c2-4b5e-a265-96eb305235f9\") " Feb 23 13:11:46.130739 master-0 kubenswrapper[17411]: I0223 13:11:46.130585 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3516f78-36c2-4b5e-a265-96eb305235f9-ready" (OuterVolumeSpecName: "ready") pod "e3516f78-36c2-4b5e-a265-96eb305235f9" (UID: "e3516f78-36c2-4b5e-a265-96eb305235f9"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:11:46.130739 master-0 kubenswrapper[17411]: I0223 13:11:46.130601 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3516f78-36c2-4b5e-a265-96eb305235f9-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "e3516f78-36c2-4b5e-a265-96eb305235f9" (UID: "e3516f78-36c2-4b5e-a265-96eb305235f9"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:46.130918 master-0 kubenswrapper[17411]: I0223 13:11:46.130834 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3516f78-36c2-4b5e-a265-96eb305235f9-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "e3516f78-36c2-4b5e-a265-96eb305235f9" (UID: "e3516f78-36c2-4b5e-a265-96eb305235f9"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:11:46.131344 master-0 kubenswrapper[17411]: I0223 13:11:46.131282 17411 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e3516f78-36c2-4b5e-a265-96eb305235f9-ready\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:46.131344 master-0 kubenswrapper[17411]: I0223 13:11:46.131309 17411 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e3516f78-36c2-4b5e-a265-96eb305235f9-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:46.131344 master-0 kubenswrapper[17411]: I0223 13:11:46.131323 17411 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e3516f78-36c2-4b5e-a265-96eb305235f9-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:46.133898 master-0 kubenswrapper[17411]: I0223 13:11:46.133587 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e3516f78-36c2-4b5e-a265-96eb305235f9-kube-api-access-2vg9x" (OuterVolumeSpecName: "kube-api-access-2vg9x") pod "e3516f78-36c2-4b5e-a265-96eb305235f9" (UID: "e3516f78-36c2-4b5e-a265-96eb305235f9"). InnerVolumeSpecName "kube-api-access-2vg9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:11:46.232999 master-0 kubenswrapper[17411]: I0223 13:11:46.232910 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vg9x\" (UniqueName: \"kubernetes.io/projected/e3516f78-36c2-4b5e-a265-96eb305235f9-kube-api-access-2vg9x\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:46.246040 master-0 kubenswrapper[17411]: I0223 13:11:46.245971 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-w868k_e3516f78-36c2-4b5e-a265-96eb305235f9/kube-multus-additional-cni-plugins/0.log" Feb 23 13:11:46.246310 master-0 kubenswrapper[17411]: I0223 13:11:46.246044 17411 generic.go:334] "Generic (PLEG): container finished" podID="e3516f78-36c2-4b5e-a265-96eb305235f9" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" exitCode=137 Feb 23 13:11:46.246310 master-0 kubenswrapper[17411]: I0223 13:11:46.246139 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" event={"ID":"e3516f78-36c2-4b5e-a265-96eb305235f9","Type":"ContainerDied","Data":"75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944"} Feb 23 13:11:46.246310 master-0 kubenswrapper[17411]: I0223 13:11:46.246192 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" event={"ID":"e3516f78-36c2-4b5e-a265-96eb305235f9","Type":"ContainerDied","Data":"1706af823399c2b9d2e90794a42ff0a2dda345f3465a9035e61145d8a55b0675"} Feb 23 13:11:46.246310 master-0 kubenswrapper[17411]: I0223 13:11:46.246201 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" Feb 23 13:11:46.246310 master-0 kubenswrapper[17411]: I0223 13:11:46.246217 17411 scope.go:117] "RemoveContainer" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" Feb 23 13:11:46.266112 master-0 kubenswrapper[17411]: I0223 13:11:46.265767 17411 scope.go:117] "RemoveContainer" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" Feb 23 13:11:46.266302 master-0 kubenswrapper[17411]: E0223 13:11:46.266237 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944\": container with ID starting with 75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944 not found: ID does not exist" containerID="75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944" Feb 23 13:11:46.266302 master-0 kubenswrapper[17411]: I0223 13:11:46.266281 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944"} err="failed to get container status \"75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944\": rpc error: code = NotFound desc = could not find container \"75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944\": container with ID starting with 75f6ba38ff0c9863e3a9bbdc65852460cf96196ab71edd3f1f80059f3e540944 not found: ID does not exist" Feb 23 13:11:46.828869 master-0 kubenswrapper[17411]: I0223 13:11:46.828828 17411 kubelet.go:1505] "Image garbage collection succeeded" Feb 23 13:11:46.949354 master-0 kubenswrapper[17411]: E0223 13:11:46.947510 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Feb 23 13:11:54.320732 master-0 kubenswrapper[17411]: I0223 13:11:54.320661 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-8hstp_44b07d33-6e84-434e-9a14-431846620968/multus-admission-controller/0.log" Feb 23 13:11:54.321325 master-0 kubenswrapper[17411]: I0223 13:11:54.320752 17411 generic.go:334] "Generic (PLEG): container finished" podID="44b07d33-6e84-434e-9a14-431846620968" containerID="e430df40036149c49e2ec2bcef759184c22db256e9c6a2afbd7778eeb4659b79" exitCode=137 Feb 23 13:11:54.321325 master-0 kubenswrapper[17411]: I0223 13:11:54.320798 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" event={"ID":"44b07d33-6e84-434e-9a14-431846620968","Type":"ContainerDied","Data":"e430df40036149c49e2ec2bcef759184c22db256e9c6a2afbd7778eeb4659b79"} Feb 23 13:11:55.106184 master-0 kubenswrapper[17411]: I0223 13:11:55.106121 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-8hstp_44b07d33-6e84-434e-9a14-431846620968/multus-admission-controller/0.log" Feb 23 13:11:55.106437 master-0 kubenswrapper[17411]: I0223 13:11:55.106207 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:11:55.193200 master-0 kubenswrapper[17411]: I0223 13:11:55.193115 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") pod \"44b07d33-6e84-434e-9a14-431846620968\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " Feb 23 13:11:55.193486 master-0 kubenswrapper[17411]: I0223 13:11:55.193444 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") pod \"44b07d33-6e84-434e-9a14-431846620968\" (UID: \"44b07d33-6e84-434e-9a14-431846620968\") " Feb 23 13:11:55.197311 master-0 kubenswrapper[17411]: I0223 13:11:55.197226 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf" (OuterVolumeSpecName: "kube-api-access-jccjf") pod "44b07d33-6e84-434e-9a14-431846620968" (UID: "44b07d33-6e84-434e-9a14-431846620968"). InnerVolumeSpecName "kube-api-access-jccjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:11:55.197561 master-0 kubenswrapper[17411]: I0223 13:11:55.197505 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "44b07d33-6e84-434e-9a14-431846620968" (UID: "44b07d33-6e84-434e-9a14-431846620968"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:11:55.295541 master-0 kubenswrapper[17411]: I0223 13:11:55.295464 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jccjf\" (UniqueName: \"kubernetes.io/projected/44b07d33-6e84-434e-9a14-431846620968-kube-api-access-jccjf\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:55.295541 master-0 kubenswrapper[17411]: I0223 13:11:55.295538 17411 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44b07d33-6e84-434e-9a14-431846620968-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:55.356961 master-0 kubenswrapper[17411]: I0223 13:11:55.356748 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-8hstp_44b07d33-6e84-434e-9a14-431846620968/multus-admission-controller/0.log" Feb 23 13:11:55.356961 master-0 kubenswrapper[17411]: I0223 13:11:55.356852 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" event={"ID":"44b07d33-6e84-434e-9a14-431846620968","Type":"ContainerDied","Data":"f67140661bca80f0082006c33ba58847d3a949b7d72bea750ff23edb65986950"} Feb 23 13:11:55.356961 master-0 kubenswrapper[17411]: I0223 13:11:55.356915 17411 scope.go:117] "RemoveContainer" containerID="66d7b9b29d7eeeb9236a56c762cde3c1a65c77718df7cdff3b00efe2346c3dc9" Feb 23 13:11:55.357920 master-0 kubenswrapper[17411]: I0223 13:11:55.356919 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp" Feb 23 13:11:55.376612 master-0 kubenswrapper[17411]: I0223 13:11:55.376541 17411 scope.go:117] "RemoveContainer" containerID="e430df40036149c49e2ec2bcef759184c22db256e9c6a2afbd7778eeb4659b79" Feb 23 13:11:55.943323 master-0 kubenswrapper[17411]: I0223 13:11:55.943270 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:11:56.117205 master-0 kubenswrapper[17411]: I0223 13:11:56.117121 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 23 13:11:56.118604 master-0 kubenswrapper[17411]: I0223 13:11:56.118556 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 23 13:11:56.119355 master-0 kubenswrapper[17411]: I0223 13:11:56.119316 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 23 13:11:56.119805 master-0 kubenswrapper[17411]: I0223 13:11:56.119770 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 23 13:11:56.120948 master-0 kubenswrapper[17411]: I0223 13:11:56.120911 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:11:56.211124 master-0 kubenswrapper[17411]: I0223 13:11:56.210968 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 23 13:11:56.211124 master-0 kubenswrapper[17411]: I0223 13:11:56.211041 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 23 13:11:56.211124 master-0 kubenswrapper[17411]: I0223 13:11:56.211066 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 23 13:11:56.211124 master-0 kubenswrapper[17411]: I0223 13:11:56.211119 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:56.211493 master-0 kubenswrapper[17411]: I0223 13:11:56.211165 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 23 13:11:56.211493 master-0 kubenswrapper[17411]: I0223 13:11:56.211234 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 23 13:11:56.211493 master-0 kubenswrapper[17411]: I0223 13:11:56.211280 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir" (OuterVolumeSpecName: "data-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:56.211493 master-0 kubenswrapper[17411]: I0223 13:11:56.211298 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 23 13:11:56.211493 master-0 kubenswrapper[17411]: I0223 13:11:56.211325 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:56.211493 master-0 kubenswrapper[17411]: I0223 13:11:56.211317 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:56.211493 master-0 kubenswrapper[17411]: I0223 13:11:56.211418 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:56.211493 master-0 kubenswrapper[17411]: I0223 13:11:56.211322 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir" (OuterVolumeSpecName: "log-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:56.211822 master-0 kubenswrapper[17411]: I0223 13:11:56.211690 17411 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:56.211822 master-0 kubenswrapper[17411]: I0223 13:11:56.211708 17411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:56.211822 master-0 kubenswrapper[17411]: I0223 13:11:56.211721 17411 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:56.211822 master-0 kubenswrapper[17411]: I0223 13:11:56.211733 17411 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:56.211822 master-0 kubenswrapper[17411]: I0223 13:11:56.211743 17411 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:56.211822 master-0 kubenswrapper[17411]: I0223 13:11:56.211756 17411 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:11:56.367201 master-0 kubenswrapper[17411]: I0223 13:11:56.367127 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 23 13:11:56.368587 master-0 kubenswrapper[17411]: I0223 
13:11:56.368537 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 23 13:11:56.369999 master-0 kubenswrapper[17411]: I0223 13:11:56.369935 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 23 13:11:56.370739 master-0 kubenswrapper[17411]: I0223 13:11:56.370684 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 23 13:11:56.372264 master-0 kubenswrapper[17411]: I0223 13:11:56.372186 17411 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14" exitCode=137 Feb 23 13:11:56.372264 master-0 kubenswrapper[17411]: I0223 13:11:56.372223 17411 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791" exitCode=137 Feb 23 13:11:56.372483 master-0 kubenswrapper[17411]: I0223 13:11:56.372380 17411 scope.go:117] "RemoveContainer" containerID="6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c" Feb 23 13:11:56.373016 master-0 kubenswrapper[17411]: I0223 13:11:56.372922 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:11:56.406386 master-0 kubenswrapper[17411]: I0223 13:11:56.406349 17411 scope.go:117] "RemoveContainer" containerID="74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63" Feb 23 13:11:56.429938 master-0 kubenswrapper[17411]: I0223 13:11:56.429889 17411 scope.go:117] "RemoveContainer" containerID="d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422" Feb 23 13:11:56.450621 master-0 kubenswrapper[17411]: I0223 13:11:56.450524 17411 scope.go:117] "RemoveContainer" containerID="2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14" Feb 23 13:11:56.473362 master-0 kubenswrapper[17411]: I0223 13:11:56.473236 17411 scope.go:117] "RemoveContainer" containerID="d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791" Feb 23 13:11:56.490984 master-0 kubenswrapper[17411]: I0223 13:11:56.490932 17411 scope.go:117] "RemoveContainer" containerID="88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036" Feb 23 13:11:56.512223 master-0 kubenswrapper[17411]: I0223 13:11:56.512164 17411 scope.go:117] "RemoveContainer" containerID="f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9" Feb 23 13:11:56.539342 master-0 kubenswrapper[17411]: I0223 13:11:56.539280 17411 scope.go:117] "RemoveContainer" containerID="b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6" Feb 23 13:11:56.563229 master-0 kubenswrapper[17411]: I0223 13:11:56.563181 17411 scope.go:117] "RemoveContainer" containerID="6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c" Feb 23 13:11:56.563867 master-0 kubenswrapper[17411]: E0223 13:11:56.563792 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c\": container with ID starting with 6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c not found: ID does not 
exist" containerID="6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c" Feb 23 13:11:56.563930 master-0 kubenswrapper[17411]: I0223 13:11:56.563851 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c"} err="failed to get container status \"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c\": rpc error: code = NotFound desc = could not find container \"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c\": container with ID starting with 6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c not found: ID does not exist" Feb 23 13:11:56.563930 master-0 kubenswrapper[17411]: I0223 13:11:56.563893 17411 scope.go:117] "RemoveContainer" containerID="74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63" Feb 23 13:11:56.564495 master-0 kubenswrapper[17411]: E0223 13:11:56.564426 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63\": container with ID starting with 74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63 not found: ID does not exist" containerID="74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63" Feb 23 13:11:56.564564 master-0 kubenswrapper[17411]: I0223 13:11:56.564506 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63"} err="failed to get container status \"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63\": rpc error: code = NotFound desc = could not find container \"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63\": container with ID starting with 74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63 not found: ID does not exist" Feb 23 13:11:56.564615 
master-0 kubenswrapper[17411]: I0223 13:11:56.564565 17411 scope.go:117] "RemoveContainer" containerID="d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422" Feb 23 13:11:56.565262 master-0 kubenswrapper[17411]: E0223 13:11:56.565177 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422\": container with ID starting with d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422 not found: ID does not exist" containerID="d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422" Feb 23 13:11:56.565336 master-0 kubenswrapper[17411]: I0223 13:11:56.565276 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422"} err="failed to get container status \"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422\": rpc error: code = NotFound desc = could not find container \"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422\": container with ID starting with d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422 not found: ID does not exist" Feb 23 13:11:56.565336 master-0 kubenswrapper[17411]: I0223 13:11:56.565319 17411 scope.go:117] "RemoveContainer" containerID="2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14" Feb 23 13:11:56.565899 master-0 kubenswrapper[17411]: E0223 13:11:56.565833 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14\": container with ID starting with 2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14 not found: ID does not exist" containerID="2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14" Feb 23 13:11:56.565899 master-0 kubenswrapper[17411]: I0223 
13:11:56.565871 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14"} err="failed to get container status \"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14\": rpc error: code = NotFound desc = could not find container \"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14\": container with ID starting with 2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14 not found: ID does not exist" Feb 23 13:11:56.565899 master-0 kubenswrapper[17411]: I0223 13:11:56.565890 17411 scope.go:117] "RemoveContainer" containerID="d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791" Feb 23 13:11:56.566612 master-0 kubenswrapper[17411]: E0223 13:11:56.566534 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791\": container with ID starting with d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791 not found: ID does not exist" containerID="d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791" Feb 23 13:11:56.566725 master-0 kubenswrapper[17411]: I0223 13:11:56.566605 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791"} err="failed to get container status \"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791\": rpc error: code = NotFound desc = could not find container \"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791\": container with ID starting with d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791 not found: ID does not exist" Feb 23 13:11:56.566725 master-0 kubenswrapper[17411]: I0223 13:11:56.566650 17411 scope.go:117] "RemoveContainer" 
containerID="88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036" Feb 23 13:11:56.567169 master-0 kubenswrapper[17411]: E0223 13:11:56.567101 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036\": container with ID starting with 88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036 not found: ID does not exist" containerID="88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036" Feb 23 13:11:56.567169 master-0 kubenswrapper[17411]: I0223 13:11:56.567154 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036"} err="failed to get container status \"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036\": rpc error: code = NotFound desc = could not find container \"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036\": container with ID starting with 88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036 not found: ID does not exist" Feb 23 13:11:56.567359 master-0 kubenswrapper[17411]: I0223 13:11:56.567185 17411 scope.go:117] "RemoveContainer" containerID="f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9" Feb 23 13:11:56.567744 master-0 kubenswrapper[17411]: E0223 13:11:56.567680 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9\": container with ID starting with f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9 not found: ID does not exist" containerID="f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9" Feb 23 13:11:56.567832 master-0 kubenswrapper[17411]: I0223 13:11:56.567740 17411 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9"} err="failed to get container status \"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9\": rpc error: code = NotFound desc = could not find container \"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9\": container with ID starting with f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9 not found: ID does not exist" Feb 23 13:11:56.567832 master-0 kubenswrapper[17411]: I0223 13:11:56.567793 17411 scope.go:117] "RemoveContainer" containerID="b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6" Feb 23 13:11:56.568327 master-0 kubenswrapper[17411]: E0223 13:11:56.568238 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6\": container with ID starting with b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6 not found: ID does not exist" containerID="b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6" Feb 23 13:11:56.568400 master-0 kubenswrapper[17411]: I0223 13:11:56.568321 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6"} err="failed to get container status \"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6\": rpc error: code = NotFound desc = could not find container \"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6\": container with ID starting with b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6 not found: ID does not exist" Feb 23 13:11:56.568400 master-0 kubenswrapper[17411]: I0223 13:11:56.568362 17411 scope.go:117] "RemoveContainer" containerID="6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c" Feb 23 13:11:56.568838 master-0 kubenswrapper[17411]: I0223 
13:11:56.568763 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c"} err="failed to get container status \"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c\": rpc error: code = NotFound desc = could not find container \"6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c\": container with ID starting with 6f63625eb6b79d91aedca462e09982d866db0110375f8150ebc287f58a06e84c not found: ID does not exist" Feb 23 13:11:56.568912 master-0 kubenswrapper[17411]: I0223 13:11:56.568839 17411 scope.go:117] "RemoveContainer" containerID="74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63" Feb 23 13:11:56.569440 master-0 kubenswrapper[17411]: I0223 13:11:56.569367 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63"} err="failed to get container status \"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63\": rpc error: code = NotFound desc = could not find container \"74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63\": container with ID starting with 74b422ed06317e0be02214c4ab0cf3f7f9ceed0bbdd49f8e7237d443a9e40b63 not found: ID does not exist" Feb 23 13:11:56.569497 master-0 kubenswrapper[17411]: I0223 13:11:56.569440 17411 scope.go:117] "RemoveContainer" containerID="d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422" Feb 23 13:11:56.569911 master-0 kubenswrapper[17411]: I0223 13:11:56.569842 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422"} err="failed to get container status \"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422\": rpc error: code = NotFound desc = could not find container 
\"d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422\": container with ID starting with d02f2931955e87c445d327f58556345d71172716bb33224b5d7b725572d9a422 not found: ID does not exist" Feb 23 13:11:56.569983 master-0 kubenswrapper[17411]: I0223 13:11:56.569905 17411 scope.go:117] "RemoveContainer" containerID="2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14" Feb 23 13:11:56.570473 master-0 kubenswrapper[17411]: I0223 13:11:56.570409 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14"} err="failed to get container status \"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14\": rpc error: code = NotFound desc = could not find container \"2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14\": container with ID starting with 2d8dac33c935e2cb77806e098a844e25d8822e69320cdd68e4e31a42b5decb14 not found: ID does not exist" Feb 23 13:11:56.570549 master-0 kubenswrapper[17411]: I0223 13:11:56.570470 17411 scope.go:117] "RemoveContainer" containerID="d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791" Feb 23 13:11:56.570935 master-0 kubenswrapper[17411]: I0223 13:11:56.570871 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791"} err="failed to get container status \"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791\": rpc error: code = NotFound desc = could not find container \"d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791\": container with ID starting with d0f813134ea441b9f5c8cf50d93d509bf3979dab02468f215b5279f3760d4791 not found: ID does not exist" Feb 23 13:11:56.571004 master-0 kubenswrapper[17411]: I0223 13:11:56.570930 17411 scope.go:117] "RemoveContainer" containerID="88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036" Feb 23 
13:11:56.571417 master-0 kubenswrapper[17411]: I0223 13:11:56.571357 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036"} err="failed to get container status \"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036\": rpc error: code = NotFound desc = could not find container \"88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036\": container with ID starting with 88045c3283a7874400db2aa0dd5ba92b3a3b82ba9d315133aed8f789e0b68036 not found: ID does not exist" Feb 23 13:11:56.571496 master-0 kubenswrapper[17411]: I0223 13:11:56.571414 17411 scope.go:117] "RemoveContainer" containerID="f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9" Feb 23 13:11:56.571944 master-0 kubenswrapper[17411]: I0223 13:11:56.571891 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9"} err="failed to get container status \"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9\": rpc error: code = NotFound desc = could not find container \"f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9\": container with ID starting with f8a9ccfcc9c3c1f60bcb646a7704eb48c129dfbd3bd93ff5e93fb3c1511046f9 not found: ID does not exist" Feb 23 13:11:56.571944 master-0 kubenswrapper[17411]: I0223 13:11:56.571934 17411 scope.go:117] "RemoveContainer" containerID="b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6" Feb 23 13:11:56.572406 master-0 kubenswrapper[17411]: I0223 13:11:56.572347 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6"} err="failed to get container status \"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6\": rpc error: code = NotFound desc = could not find 
container \"b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6\": container with ID starting with b6cea4f641445686b39186718b09eaa9e48995ffd6cc3634f2005c8def2afbe6 not found: ID does not exist" Feb 23 13:11:56.868752 master-0 kubenswrapper[17411]: I0223 13:11:56.868524 17411 scope.go:117] "RemoveContainer" containerID="a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7" Feb 23 13:11:56.879286 master-0 kubenswrapper[17411]: I0223 13:11:56.879180 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a83278819db2092fa26d8274eb3f00" path="/var/lib/kubelet/pods/18a83278819db2092fa26d8274eb3f00/volumes" Feb 23 13:11:56.948621 master-0 kubenswrapper[17411]: E0223 13:11:56.948227 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:11:57.383927 master-0 kubenswrapper[17411]: I0223 13:11:57.383828 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6"} Feb 23 13:11:59.553943 master-0 kubenswrapper[17411]: E0223 13:11:59.553612 17411 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.1896e24418ad28e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container 
etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:11:25.516523744 +0000 UTC m=+278.944030341,LastTimestamp:2026-02-23 13:11:25.516523744 +0000 UTC m=+278.944030341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:12:02.868661 master-0 kubenswrapper[17411]: I0223 13:12:02.868569 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:12:02.895018 master-0 kubenswrapper[17411]: I0223 13:12:02.894897 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b" Feb 23 13:12:02.895018 master-0 kubenswrapper[17411]: I0223 13:12:02.894949 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b" Feb 23 13:12:06.948789 master-0 kubenswrapper[17411]: E0223 13:12:06.948670 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:12:08.691809 master-0 kubenswrapper[17411]: E0223 13:12:08.691715 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:11:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:11:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:11:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:11:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:12:16.950199 master-0 kubenswrapper[17411]: E0223 13:12:16.950071 17411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:12:16.950199 master-0 kubenswrapper[17411]: I0223 13:12:16.950180 17411 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 23 13:12:18.692153 master-0 kubenswrapper[17411]: E0223 13:12:18.692014 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Feb 23 13:12:19.557030 master-0 kubenswrapper[17411]: I0223 13:12:19.556974 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-4wvxd_3d82f223-e28b-4917-8513-3ca5c6e9bff7/approver/1.log" Feb 23 13:12:19.558081 master-0 kubenswrapper[17411]: I0223 13:12:19.558024 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-4wvxd_3d82f223-e28b-4917-8513-3ca5c6e9bff7/approver/0.log" Feb 23 13:12:19.558632 master-0 kubenswrapper[17411]: I0223 13:12:19.558591 17411 generic.go:334] "Generic (PLEG): container finished" podID="3d82f223-e28b-4917-8513-3ca5c6e9bff7" containerID="b3ddf54bf6f19c8296e0175ded46bf9b3d3f12dbbe1d4cee2713a7180fbe826e" exitCode=1 Feb 23 13:12:19.558694 master-0 kubenswrapper[17411]: I0223 13:12:19.558642 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-4wvxd" event={"ID":"3d82f223-e28b-4917-8513-3ca5c6e9bff7","Type":"ContainerDied","Data":"b3ddf54bf6f19c8296e0175ded46bf9b3d3f12dbbe1d4cee2713a7180fbe826e"} Feb 23 13:12:19.558694 master-0 kubenswrapper[17411]: I0223 13:12:19.558684 17411 scope.go:117] "RemoveContainer" containerID="c1dd3ed6ae85552fa55579d176bf04ab4acb74f8741f6985ce9c654154b5172e" Feb 23 13:12:19.559704 master-0 kubenswrapper[17411]: I0223 13:12:19.559658 17411 scope.go:117] "RemoveContainer" containerID="b3ddf54bf6f19c8296e0175ded46bf9b3d3f12dbbe1d4cee2713a7180fbe826e" Feb 23 13:12:20.579211 master-0 kubenswrapper[17411]: I0223 13:12:20.579154 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-4wvxd_3d82f223-e28b-4917-8513-3ca5c6e9bff7/approver/1.log" Feb 23 13:12:20.579757 master-0 kubenswrapper[17411]: I0223 13:12:20.579708 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-4wvxd" 
event={"ID":"3d82f223-e28b-4917-8513-3ca5c6e9bff7","Type":"ContainerStarted","Data":"32af81901c73e31972891eefedecd980e3001fb4183de84cbf3c1443984142ea"} Feb 23 13:12:26.879820 master-0 kubenswrapper[17411]: I0223 13:12:26.879723 17411 status_manager.go:851] "Failed to get status for pod" podUID="18a83278819db2092fa26d8274eb3f00" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Feb 23 13:12:26.950640 master-0 kubenswrapper[17411]: E0223 13:12:26.950518 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 23 13:12:28.693313 master-0 kubenswrapper[17411]: E0223 13:12:28.693203 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:12:33.557179 master-0 kubenswrapper[17411]: E0223 13:12:33.556973 17411 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.1896e24418ae78d3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Killing,Message:Stopping container etcd-metrics,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:11:25.516609747 +0000 UTC m=+278.944116344,LastTimestamp:2026-02-23 13:11:25.516609747 
+0000 UTC m=+278.944116344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:12:36.897805 master-0 kubenswrapper[17411]: E0223 13:12:36.897626 17411 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 23 13:12:36.898703 master-0 kubenswrapper[17411]: I0223 13:12:36.898342 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 23 13:12:36.936020 master-0 kubenswrapper[17411]: W0223 13:12:36.935895 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb419b8533666d3ae7054c771ce97a95f.slice/crio-bca52f5edf413e0ebe0261cc655a1ba27dc207732da64352d00c9e3b39225ffa WatchSource:0}: Error finding container bca52f5edf413e0ebe0261cc655a1ba27dc207732da64352d00c9e3b39225ffa: Status 404 returned error can't find the container with id bca52f5edf413e0ebe0261cc655a1ba27dc207732da64352d00c9e3b39225ffa Feb 23 13:12:37.151635 master-0 kubenswrapper[17411]: E0223 13:12:37.151469 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 23 13:12:37.724183 master-0 kubenswrapper[17411]: I0223 13:12:37.724086 17411 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="138a0e172cb694f7a889846476b281ee3ad0331f1bb6c96a2d9170b7a71729da" exitCode=0 Feb 23 13:12:37.724183 master-0 kubenswrapper[17411]: I0223 13:12:37.724192 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"138a0e172cb694f7a889846476b281ee3ad0331f1bb6c96a2d9170b7a71729da"} Feb 23 13:12:37.724662 master-0 kubenswrapper[17411]: I0223 13:12:37.724285 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"bca52f5edf413e0ebe0261cc655a1ba27dc207732da64352d00c9e3b39225ffa"} Feb 23 13:12:37.724819 master-0 kubenswrapper[17411]: I0223 13:12:37.724766 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b" Feb 23 13:12:37.724819 master-0 kubenswrapper[17411]: I0223 13:12:37.724802 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b" Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: E0223 13:12:38.498861 17411 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b0e437b4-e6fd-482f-91a2-f48b9f087321_0(8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56" Netns:"/var/run/netns/0941aca4-109e-49e7-9e97-2b1ad2728783" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56;K8S_POD_UID=b0e437b4-e6fd-482f-91a2-f48b9f087321" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: 
[openshift-monitoring/alertmanager-main-0/b0e437b4-e6fd-482f-91a2-f48b9f087321]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: > Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: E0223 13:12:38.498973 17411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b0e437b4-e6fd-482f-91a2-f48b9f087321_0(8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56" Netns:"/var/run/netns/0941aca4-109e-49e7-9e97-2b1ad2728783" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56;K8S_POD_UID=b0e437b4-e6fd-482f-91a2-f48b9f087321" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/b0e437b4-e6fd-482f-91a2-f48b9f087321]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: > pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: E0223 13:12:38.498997 17411 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b0e437b4-e6fd-482f-91a2-f48b9f087321_0(8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56" Netns:"/var/run/netns/0941aca4-109e-49e7-9e97-2b1ad2728783" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56;K8S_POD_UID=b0e437b4-e6fd-482f-91a2-f48b9f087321" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/b0e437b4-e6fd-482f-91a2-f48b9f087321]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:12:38.499024 master-0 kubenswrapper[17411]: > pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:12:38.500551 master-0 kubenswrapper[17411]: E0223 13:12:38.499099 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"alertmanager-main-0_openshift-monitoring(b0e437b4-e6fd-482f-91a2-f48b9f087321)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"alertmanager-main-0_openshift-monitoring(b0e437b4-e6fd-482f-91a2-f48b9f087321)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b0e437b4-e6fd-482f-91a2-f48b9f087321_0(8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56\\\" Netns:\\\"/var/run/netns/0941aca4-109e-49e7-9e97-2b1ad2728783\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=8c33af5e57da70132fdb07bb797a746bd6b43ec80c40159319bee31bbf859e56;K8S_POD_UID=b0e437b4-e6fd-482f-91a2-f48b9f087321\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/b0e437b4-e6fd-482f-91a2-f48b9f087321]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" Feb 23 13:12:38.694642 master-0 kubenswrapper[17411]: E0223 13:12:38.694504 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:12:38.732808 master-0 kubenswrapper[17411]: I0223 13:12:38.732696 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:12:38.733779 master-0 kubenswrapper[17411]: I0223 13:12:38.733734 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:12:40.751822 master-0 kubenswrapper[17411]: I0223 13:12:40.751766 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_382f96d2-f66c-4adc-9b6d-4ed63124da89/installer/0.log" Feb 23 13:12:40.752567 master-0 kubenswrapper[17411]: I0223 13:12:40.751852 17411 generic.go:334] "Generic (PLEG): container finished" podID="382f96d2-f66c-4adc-9b6d-4ed63124da89" containerID="75e186849ab472b06510b38037d45625e486194e5caf39cee1406a4fb4c97a4d" exitCode=1 Feb 23 13:12:40.752567 master-0 kubenswrapper[17411]: I0223 13:12:40.751917 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"382f96d2-f66c-4adc-9b6d-4ed63124da89","Type":"ContainerDied","Data":"75e186849ab472b06510b38037d45625e486194e5caf39cee1406a4fb4c97a4d"} Feb 23 13:12:42.107022 master-0 kubenswrapper[17411]: I0223 13:12:42.106992 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_382f96d2-f66c-4adc-9b6d-4ed63124da89/installer/0.log" Feb 23 13:12:42.107635 master-0 kubenswrapper[17411]: I0223 13:12:42.107616 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:12:42.163869 master-0 kubenswrapper[17411]: I0223 13:12:42.163834 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-kubelet-dir\") pod \"382f96d2-f66c-4adc-9b6d-4ed63124da89\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " Feb 23 13:12:42.164132 master-0 kubenswrapper[17411]: I0223 13:12:42.164101 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-var-lock\") pod \"382f96d2-f66c-4adc-9b6d-4ed63124da89\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " Feb 23 13:12:42.164442 master-0 kubenswrapper[17411]: I0223 13:12:42.164411 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/382f96d2-f66c-4adc-9b6d-4ed63124da89-kube-api-access\") pod \"382f96d2-f66c-4adc-9b6d-4ed63124da89\" (UID: \"382f96d2-f66c-4adc-9b6d-4ed63124da89\") " Feb 23 13:12:42.164683 master-0 kubenswrapper[17411]: I0223 13:12:42.164082 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "382f96d2-f66c-4adc-9b6d-4ed63124da89" (UID: "382f96d2-f66c-4adc-9b6d-4ed63124da89"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:42.164796 master-0 kubenswrapper[17411]: I0223 13:12:42.164136 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-var-lock" (OuterVolumeSpecName: "var-lock") pod "382f96d2-f66c-4adc-9b6d-4ed63124da89" (UID: "382f96d2-f66c-4adc-9b6d-4ed63124da89"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:42.165328 master-0 kubenswrapper[17411]: I0223 13:12:42.165297 17411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:42.165603 master-0 kubenswrapper[17411]: I0223 13:12:42.165579 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/382f96d2-f66c-4adc-9b6d-4ed63124da89-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:42.169083 master-0 kubenswrapper[17411]: I0223 13:12:42.168880 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382f96d2-f66c-4adc-9b6d-4ed63124da89-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "382f96d2-f66c-4adc-9b6d-4ed63124da89" (UID: "382f96d2-f66c-4adc-9b6d-4ed63124da89"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:42.267416 master-0 kubenswrapper[17411]: I0223 13:12:42.267358 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/382f96d2-f66c-4adc-9b6d-4ed63124da89-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:12:42.767855 master-0 kubenswrapper[17411]: I0223 13:12:42.767808 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_382f96d2-f66c-4adc-9b6d-4ed63124da89/installer/0.log" Feb 23 13:12:42.768196 master-0 kubenswrapper[17411]: I0223 13:12:42.767874 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"382f96d2-f66c-4adc-9b6d-4ed63124da89","Type":"ContainerDied","Data":"948cc2a2055945d11e25dd026a1e35774b134b2d31df361e246a3e9606f15cae"} Feb 23 13:12:42.768196 master-0 kubenswrapper[17411]: I0223 13:12:42.767904 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948cc2a2055945d11e25dd026a1e35774b134b2d31df361e246a3e9606f15cae" Feb 23 13:12:42.768444 master-0 kubenswrapper[17411]: I0223 13:12:42.768360 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 23 13:12:43.927695 master-0 kubenswrapper[17411]: E0223 13:12:43.927629 17411 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:12:43.927695 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_c229faa3-6eb1-42d6-8e10-f4cadc952d17_0(18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39" Netns:"/var/run/netns/7af2b4fd-0f4d-4dfd-b411-23cb7a97d35f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39;K8S_POD_UID=c229faa3-6eb1-42d6-8e10-f4cadc952d17" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/c229faa3-6eb1-42d6-8e10-f4cadc952d17]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0) Feb 23 13:12:43.927695 master-0 kubenswrapper[17411]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:12:43.927695 master-0 kubenswrapper[17411]: > Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: E0223 13:12:43.927710 17411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_c229faa3-6eb1-42d6-8e10-f4cadc952d17_0(18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39" Netns:"/var/run/netns/7af2b4fd-0f4d-4dfd-b411-23cb7a97d35f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39;K8S_POD_UID=c229faa3-6eb1-42d6-8e10-f4cadc952d17" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/c229faa3-6eb1-42d6-8e10-f4cadc952d17]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0) 
Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: > pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: E0223 13:12:43.927746 17411 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_c229faa3-6eb1-42d6-8e10-f4cadc952d17_0(18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39" Netns:"/var/run/netns/7af2b4fd-0f4d-4dfd-b411-23cb7a97d35f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39;K8S_POD_UID=c229faa3-6eb1-42d6-8e10-f4cadc952d17" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/c229faa3-6eb1-42d6-8e10-f4cadc952d17]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: the server was 
unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0) Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: > pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:12:43.928723 master-0 kubenswrapper[17411]: E0223 13:12:43.927840 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"prometheus-k8s-0_openshift-monitoring(c229faa3-6eb1-42d6-8e10-f4cadc952d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"prometheus-k8s-0_openshift-monitoring(c229faa3-6eb1-42d6-8e10-f4cadc952d17)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_c229faa3-6eb1-42d6-8e10-f4cadc952d17_0(18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39\\\" Netns:\\\"/var/run/netns/7af2b4fd-0f4d-4dfd-b411-23cb7a97d35f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=18035a9a5b83b4f4ec22e6afc9b02101c8cac3922b0df7f2a08b266cf68f9e39;K8S_POD_UID=c229faa3-6eb1-42d6-8e10-f4cadc952d17\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] 
networking: Multus: [openshift-monitoring/prometheus-k8s-0/c229faa3-6eb1-42d6-8e10-f4cadc952d17]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" Feb 23 13:12:44.785316 master-0 kubenswrapper[17411]: I0223 13:12:44.785218 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:12:44.786721 master-0 kubenswrapper[17411]: I0223 13:12:44.786669 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:12:47.553541 master-0 kubenswrapper[17411]: E0223 13:12:47.553306 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 23 13:12:48.695109 master-0 kubenswrapper[17411]: E0223 13:12:48.694985 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:12:48.695109 master-0 kubenswrapper[17411]: E0223 13:12:48.695066 17411 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:12:58.356238 master-0 kubenswrapper[17411]: E0223 13:12:58.356145 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 23 13:13:07.560420 master-0 kubenswrapper[17411]: E0223 13:13:07.559968 17411 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.1896e24418af8f81 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container 
etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:11:25.516681089 +0000 UTC m=+278.944187746,LastTimestamp:2026-02-23 13:11:25.516681089 +0000 UTC m=+278.944187746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:13:09.958293 master-0 kubenswrapper[17411]: E0223 13:13:09.958102 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 23 13:13:11.728113 master-0 kubenswrapper[17411]: E0223 13:13:11.728029 17411 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 23 13:13:13.023381 master-0 kubenswrapper[17411]: I0223 13:13:13.023234 17411 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="36ef407dbfecebe011442b02b99d90f52a28815db706bcb817335ca010a0a154" exitCode=0 Feb 23 13:13:13.023381 master-0 kubenswrapper[17411]: I0223 13:13:13.023318 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"36ef407dbfecebe011442b02b99d90f52a28815db706bcb817335ca010a0a154"} Feb 23 13:13:13.024567 master-0 kubenswrapper[17411]: I0223 13:13:13.023926 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b" Feb 23 13:13:13.024567 master-0 kubenswrapper[17411]: I0223 13:13:13.023954 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b" Feb 23 
13:13:14.921647 master-0 kubenswrapper[17411]: E0223 13:13:14.921549 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[trusted-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" Feb 23 13:13:15.041400 master-0 kubenswrapper[17411]: I0223 13:13:15.041323 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:13:16.053210 master-0 kubenswrapper[17411]: I0223 13:13:16.053141 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f_0d7283ee-8959-44b6-83fb-b152510485eb/config-sync-controllers/0.log" Feb 23 13:13:16.053793 master-0 kubenswrapper[17411]: I0223 13:13:16.053662 17411 generic.go:334] "Generic (PLEG): container finished" podID="0d7283ee-8959-44b6-83fb-b152510485eb" containerID="e30f446bb2714d380fa7909fd4a0293b5a66a259d785eaa0ff99a8d5b7fba280" exitCode=1 Feb 23 13:13:16.053793 master-0 kubenswrapper[17411]: I0223 13:13:16.053714 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" event={"ID":"0d7283ee-8959-44b6-83fb-b152510485eb","Type":"ContainerDied","Data":"e30f446bb2714d380fa7909fd4a0293b5a66a259d785eaa0ff99a8d5b7fba280"} Feb 23 13:13:16.054485 master-0 kubenswrapper[17411]: I0223 13:13:16.054452 17411 scope.go:117] "RemoveContainer" containerID="e30f446bb2714d380fa7909fd4a0293b5a66a259d785eaa0ff99a8d5b7fba280" Feb 23 13:13:16.334262 master-0 kubenswrapper[17411]: I0223 13:13:16.334169 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:13:16.337306 master-0 kubenswrapper[17411]: I0223 13:13:16.337212 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/679fabb5-a261-402e-b5be-8fe7f0da0ec8-trusted-ca\") pod \"console-operator-5df5ffc47c-zwmzz\" (UID: \"679fabb5-a261-402e-b5be-8fe7f0da0ec8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:13:16.545588 master-0 kubenswrapper[17411]: I0223 13:13:16.545498 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-8bvc9" Feb 23 13:13:16.553725 master-0 kubenswrapper[17411]: I0223 13:13:16.553641 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:13:17.066430 master-0 kubenswrapper[17411]: I0223 13:13:17.066361 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f_0d7283ee-8959-44b6-83fb-b152510485eb/config-sync-controllers/0.log" Feb 23 13:13:17.067651 master-0 kubenswrapper[17411]: I0223 13:13:17.067601 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" event={"ID":"0d7283ee-8959-44b6-83fb-b152510485eb","Type":"ContainerStarted","Data":"ac6e615ba950366dd70a7675a7a9d738f505ddf79773ee6bfd0f4cbfbbe0127c"} Feb 23 13:13:19.092163 master-0 kubenswrapper[17411]: I0223 13:13:19.091962 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-j5hpl_c0d6008c-6e09-4e61-83a5-60456ca90e1e/manager/1.log" Feb 23 13:13:19.093451 master-0 kubenswrapper[17411]: I0223 13:13:19.093405 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-j5hpl_c0d6008c-6e09-4e61-83a5-60456ca90e1e/manager/0.log" Feb 23 13:13:19.093576 master-0 kubenswrapper[17411]: I0223 13:13:19.093470 17411 generic.go:334] "Generic (PLEG): container finished" podID="c0d6008c-6e09-4e61-83a5-60456ca90e1e" containerID="9a0997d75615489d4d91525d520b1f48b044636546aee09415313e7b839573b0" exitCode=1 Feb 23 13:13:19.093576 master-0 kubenswrapper[17411]: I0223 13:13:19.093512 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" event={"ID":"c0d6008c-6e09-4e61-83a5-60456ca90e1e","Type":"ContainerDied","Data":"9a0997d75615489d4d91525d520b1f48b044636546aee09415313e7b839573b0"} Feb 23 13:13:19.093576 master-0 kubenswrapper[17411]: I0223 13:13:19.093552 17411 scope.go:117] "RemoveContainer" containerID="49260b269ae6d09884492d00790a3a52d5e0644389747da3e51aa260e0b91b26" Feb 23 13:13:19.094766 master-0 kubenswrapper[17411]: I0223 13:13:19.094695 17411 scope.go:117] "RemoveContainer" containerID="9a0997d75615489d4d91525d520b1f48b044636546aee09415313e7b839573b0" Feb 23 13:13:20.103439 master-0 kubenswrapper[17411]: I0223 13:13:20.103367 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-j5hpl_c0d6008c-6e09-4e61-83a5-60456ca90e1e/manager/1.log" Feb 23 13:13:20.104025 master-0 kubenswrapper[17411]: I0223 13:13:20.103972 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" 
event={"ID":"c0d6008c-6e09-4e61-83a5-60456ca90e1e","Type":"ContainerStarted","Data":"89cdfce58c99440398f0b231ab4d6b1578bc03bc59bfdc2d0572742cc7b4af28"} Feb 23 13:13:20.104475 master-0 kubenswrapper[17411]: I0223 13:13:20.104431 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:13:23.160511 master-0 kubenswrapper[17411]: E0223 13:13:23.160416 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 23 13:13:26.894740 master-0 kubenswrapper[17411]: I0223 13:13:26.894563 17411 status_manager.go:851] "Failed to get status for pod" podUID="3d82f223-e28b-4917-8513-3ca5c6e9bff7" pod="openshift-network-node-identity/network-node-identity-4wvxd" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-4wvxd)" Feb 23 13:13:27.163850 master-0 kubenswrapper[17411]: I0223 13:13:27.163688 17411 generic.go:334] "Generic (PLEG): container finished" podID="1d953c37-1b74-4ce5-89cb-b3f53454fc57" containerID="00e189fb9a66fa8bfe8c8ab05aa3a818d35a806659732011b60d32cd72335a4c" exitCode=0 Feb 23 13:13:27.163850 master-0 kubenswrapper[17411]: I0223 13:13:27.163765 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" event={"ID":"1d953c37-1b74-4ce5-89cb-b3f53454fc57","Type":"ContainerDied","Data":"00e189fb9a66fa8bfe8c8ab05aa3a818d35a806659732011b60d32cd72335a4c"} Feb 23 13:13:27.163850 master-0 kubenswrapper[17411]: I0223 13:13:27.163853 17411 scope.go:117] "RemoveContainer" containerID="611405a04dc23476e0102b383f4f0d51fbb39430cdde420d7a3d20790ecb0a3a" Feb 23 13:13:27.165735 
master-0 kubenswrapper[17411]: I0223 13:13:27.165473 17411 scope.go:117] "RemoveContainer" containerID="00e189fb9a66fa8bfe8c8ab05aa3a818d35a806659732011b60d32cd72335a4c" Feb 23 13:13:27.930877 master-0 kubenswrapper[17411]: I0223 13:13:27.930829 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:13:28.175308 master-0 kubenswrapper[17411]: I0223 13:13:28.175174 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" event={"ID":"1d953c37-1b74-4ce5-89cb-b3f53454fc57","Type":"ContainerStarted","Data":"effb35629ccc781586e238519ef24368dc15def9d4cb7c683f959be01492a14a"} Feb 23 13:13:28.175678 master-0 kubenswrapper[17411]: I0223 13:13:28.175603 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:13:28.179173 master-0 kubenswrapper[17411]: I0223 13:13:28.179107 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-28zcz" Feb 23 13:13:28.181692 master-0 kubenswrapper[17411]: I0223 13:13:28.181629 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-bckd6_bfbb4d6d-7047-48cb-be03-97a57fc688e3/manager/1.log" Feb 23 13:13:28.182685 master-0 kubenswrapper[17411]: I0223 13:13:28.182645 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-bckd6_bfbb4d6d-7047-48cb-be03-97a57fc688e3/manager/0.log" Feb 23 13:13:28.183364 master-0 kubenswrapper[17411]: I0223 13:13:28.183304 17411 generic.go:334] "Generic (PLEG): container finished" podID="bfbb4d6d-7047-48cb-be03-97a57fc688e3" containerID="851d34e72cd075433d8cf4b69dc2fdf69944f4b7cdd7245de32f6eacad0a08da" exitCode=1 Feb 23 13:13:28.183785 master-0 kubenswrapper[17411]: 
I0223 13:13:28.183382 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" event={"ID":"bfbb4d6d-7047-48cb-be03-97a57fc688e3","Type":"ContainerDied","Data":"851d34e72cd075433d8cf4b69dc2fdf69944f4b7cdd7245de32f6eacad0a08da"} Feb 23 13:13:28.183943 master-0 kubenswrapper[17411]: I0223 13:13:28.183922 17411 scope.go:117] "RemoveContainer" containerID="b8216c6629595ae79e53d792a20a769b60a06e1e5c09e5dc292d86cb2730407e" Feb 23 13:13:28.184659 master-0 kubenswrapper[17411]: I0223 13:13:28.184618 17411 scope.go:117] "RemoveContainer" containerID="851d34e72cd075433d8cf4b69dc2fdf69944f4b7cdd7245de32f6eacad0a08da" Feb 23 13:13:29.137615 master-0 kubenswrapper[17411]: E0223 13:13:29.137493 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:13:19Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:13:19Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:13:19Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:13:19Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:13:29.204573 master-0 kubenswrapper[17411]: I0223 13:13:29.204442 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-bckd6_bfbb4d6d-7047-48cb-be03-97a57fc688e3/manager/1.log" Feb 23 13:13:29.205390 master-0 kubenswrapper[17411]: I0223 13:13:29.205333 17411 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" event={"ID":"bfbb4d6d-7047-48cb-be03-97a57fc688e3","Type":"ContainerStarted","Data":"f17b0a350e1b271c5adeb0dd74bb5acb50055321de17005e034e0f8084fe73ce"} Feb 23 13:13:29.206063 master-0 kubenswrapper[17411]: I0223 13:13:29.206026 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:13:31.576573 master-0 kubenswrapper[17411]: I0223 13:13:31.576473 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-j5hpl" Feb 23 13:13:32.228702 master-0 kubenswrapper[17411]: I0223 13:13:32.228622 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/2.log" Feb 23 13:13:32.229236 master-0 kubenswrapper[17411]: I0223 13:13:32.229118 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/1.log" Feb 23 13:13:32.229236 master-0 kubenswrapper[17411]: I0223 13:13:32.229170 17411 generic.go:334] "Generic (PLEG): container finished" podID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" containerID="fdf69ec24e1c6086e49f484fb8b8dd94cca3653e3ce3d1c63357917cb9333952" exitCode=1 Feb 23 13:13:32.229442 master-0 kubenswrapper[17411]: I0223 13:13:32.229225 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerDied","Data":"fdf69ec24e1c6086e49f484fb8b8dd94cca3653e3ce3d1c63357917cb9333952"} Feb 23 13:13:32.229442 master-0 kubenswrapper[17411]: I0223 13:13:32.229340 17411 scope.go:117] 
"RemoveContainer" containerID="b344f0832b62956e749c09fccb690fc11d54040c9d919827bfbb6ce448268045" Feb 23 13:13:32.230279 master-0 kubenswrapper[17411]: I0223 13:13:32.230219 17411 scope.go:117] "RemoveContainer" containerID="fdf69ec24e1c6086e49f484fb8b8dd94cca3653e3ce3d1c63357917cb9333952" Feb 23 13:13:33.238376 master-0 kubenswrapper[17411]: I0223 13:13:33.238309 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/2.log" Feb 23 13:13:33.238376 master-0 kubenswrapper[17411]: I0223 13:13:33.238378 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"654b839ba70d24ce75c6c6573c01c8e43093b01864c80ec73e61d6789a8e902a"} Feb 23 13:13:38.277852 master-0 kubenswrapper[17411]: I0223 13:13:38.277773 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f_0d7283ee-8959-44b6-83fb-b152510485eb/config-sync-controllers/0.log" Feb 23 13:13:38.278613 master-0 kubenswrapper[17411]: I0223 13:13:38.278333 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f_0d7283ee-8959-44b6-83fb-b152510485eb/cluster-cloud-controller-manager/0.log" Feb 23 13:13:38.278613 master-0 kubenswrapper[17411]: I0223 13:13:38.278383 17411 generic.go:334] "Generic (PLEG): container finished" podID="0d7283ee-8959-44b6-83fb-b152510485eb" containerID="44b7755ac7e8a439ff0fc3edb598f7964183e231ff745d6b5c721bfaa7e89066" exitCode=1 Feb 23 13:13:38.278613 master-0 kubenswrapper[17411]: I0223 13:13:38.278422 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" event={"ID":"0d7283ee-8959-44b6-83fb-b152510485eb","Type":"ContainerDied","Data":"44b7755ac7e8a439ff0fc3edb598f7964183e231ff745d6b5c721bfaa7e89066"} Feb 23 13:13:38.279028 master-0 kubenswrapper[17411]: I0223 13:13:38.278992 17411 scope.go:117] "RemoveContainer" containerID="44b7755ac7e8a439ff0fc3edb598f7964183e231ff745d6b5c721bfaa7e89066" Feb 23 13:13:39.138377 master-0 kubenswrapper[17411]: E0223 13:13:39.138232 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:13:39.287876 master-0 kubenswrapper[17411]: I0223 13:13:39.287767 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f_0d7283ee-8959-44b6-83fb-b152510485eb/config-sync-controllers/0.log" Feb 23 13:13:39.289407 master-0 kubenswrapper[17411]: I0223 13:13:39.289342 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f_0d7283ee-8959-44b6-83fb-b152510485eb/cluster-cloud-controller-manager/0.log" Feb 23 13:13:39.289493 master-0 kubenswrapper[17411]: I0223 13:13:39.289462 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-q9r7f" event={"ID":"0d7283ee-8959-44b6-83fb-b152510485eb","Type":"ContainerStarted","Data":"f0c96fcee33b0d89c57444fc9cbcdc0ee15312d7f4f805c37070513ea09ac9b6"} Feb 23 13:13:39.322171 master-0 kubenswrapper[17411]: E0223 13:13:39.322080 17411 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:13:39.322171 master-0 
kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b0e437b4-e6fd-482f-91a2-f48b9f087321_0(aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f" Netns:"/var/run/netns/c8ad3ea0-383b-44b4-9163-53c9b151a706" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f;K8S_POD_UID=b0e437b4-e6fd-482f-91a2-f48b9f087321" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/b0e437b4-e6fd-482f-91a2-f48b9f087321]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:13:39.322171 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:13:39.322171 master-0 kubenswrapper[17411]: > Feb 23 13:13:39.322456 
master-0 kubenswrapper[17411]: E0223 13:13:39.322207 17411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:13:39.322456 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b0e437b4-e6fd-482f-91a2-f48b9f087321_0(aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f" Netns:"/var/run/netns/c8ad3ea0-383b-44b4-9163-53c9b151a706" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f;K8S_POD_UID=b0e437b4-e6fd-482f-91a2-f48b9f087321" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/b0e437b4-e6fd-482f-91a2-f48b9f087321]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:13:39.322456 master-0 kubenswrapper[17411]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:13:39.322456 master-0 kubenswrapper[17411]: > pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:13:39.322456 master-0 kubenswrapper[17411]: E0223 13:13:39.322276 17411 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:13:39.322456 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b0e437b4-e6fd-482f-91a2-f48b9f087321_0(aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f" Netns:"/var/run/netns/c8ad3ea0-383b-44b4-9163-53c9b151a706" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f;K8S_POD_UID=b0e437b4-e6fd-482f-91a2-f48b9f087321" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/b0e437b4-e6fd-482f-91a2-f48b9f087321]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:13:39.322456 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:13:39.322456 master-0 kubenswrapper[17411]: > pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:13:39.322767 master-0 kubenswrapper[17411]: E0223 13:13:39.322472 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"alertmanager-main-0_openshift-monitoring(b0e437b4-e6fd-482f-91a2-f48b9f087321)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"alertmanager-main-0_openshift-monitoring(b0e437b4-e6fd-482f-91a2-f48b9f087321)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b0e437b4-e6fd-482f-91a2-f48b9f087321_0(aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f\\\" Netns:\\\"/var/run/netns/c8ad3ea0-383b-44b4-9163-53c9b151a706\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=aa233e422c4f960456910a2b36898c85b512fba856e4908bd350a6eecba0f90f;K8S_POD_UID=b0e437b4-e6fd-482f-91a2-f48b9f087321\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/b0e437b4-e6fd-482f-91a2-f48b9f087321]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" Feb 23 13:13:39.561961 master-0 kubenswrapper[17411]: E0223 13:13:39.561567 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:13:40.298335 master-0 kubenswrapper[17411]: I0223 13:13:40.298219 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:13:40.299550 master-0 kubenswrapper[17411]: I0223 13:13:40.299506 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:13:41.564199 master-0 kubenswrapper[17411]: E0223 13:13:41.564028 17411 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.1896e24418ae931d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:11:25.516616477 +0000 UTC m=+278.944123074,LastTimestamp:2026-02-23 13:11:25.516616477 +0000 UTC m=+278.944123074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:13:41.916781 master-0 kubenswrapper[17411]: I0223 13:13:41.916576 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-bckd6" Feb 23 13:13:45.615548 master-0 kubenswrapper[17411]: E0223 13:13:45.615452 17411 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 13:13:45.615548 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_c229faa3-6eb1-42d6-8e10-f4cadc952d17_0(dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed 
(add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d" Netns:"/var/run/netns/45c62913-80db-4418-a75d-f71b238c6630" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d;K8S_POD_UID=c229faa3-6eb1-42d6-8e10-f4cadc952d17" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/c229faa3-6eb1-42d6-8e10-f4cadc952d17]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:13:45.615548 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:13:45.615548 master-0 kubenswrapper[17411]: > Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: E0223 13:13:45.615591 17411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_c229faa3-6eb1-42d6-8e10-f4cadc952d17_0(dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d): error adding pod 
openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d" Netns:"/var/run/netns/45c62913-80db-4418-a75d-f71b238c6630" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d;K8S_POD_UID=c229faa3-6eb1-42d6-8e10-f4cadc952d17" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/c229faa3-6eb1-42d6-8e10-f4cadc952d17]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: > pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: E0223 13:13:45.615638 17411 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod 
network sandbox k8s_prometheus-k8s-0_openshift-monitoring_c229faa3-6eb1-42d6-8e10-f4cadc952d17_0(dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d" Netns:"/var/run/netns/45c62913-80db-4418-a75d-f71b238c6630" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d;K8S_POD_UID=c229faa3-6eb1-42d6-8e10-f4cadc952d17" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/c229faa3-6eb1-42d6-8e10-f4cadc952d17]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: > pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:13:45.616510 master-0 kubenswrapper[17411]: E0223 13:13:45.615784 17411 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"prometheus-k8s-0_openshift-monitoring(c229faa3-6eb1-42d6-8e10-f4cadc952d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"prometheus-k8s-0_openshift-monitoring(c229faa3-6eb1-42d6-8e10-f4cadc952d17)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_c229faa3-6eb1-42d6-8e10-f4cadc952d17_0(dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d\\\" Netns:\\\"/var/run/netns/45c62913-80db-4418-a75d-f71b238c6630\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=dba753b6527a1bfaf9da0f6d0e3995028b407d10549aa7946e15591340d6321d;K8S_POD_UID=c229faa3-6eb1-42d6-8e10-f4cadc952d17\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/c229faa3-6eb1-42d6-8e10-f4cadc952d17]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" Feb 23 13:13:46.395448 master-0 kubenswrapper[17411]: I0223 13:13:46.395348 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:13:46.396547 master-0 kubenswrapper[17411]: I0223 13:13:46.396476 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:13:47.027229 master-0 kubenswrapper[17411]: E0223 13:13:47.027155 17411 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 23 13:13:47.403884 master-0 kubenswrapper[17411]: I0223 13:13:47.403807 17411 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="a6d9fa15caf4978962c0fd0f30dff553f9e1ce8b8724f621d915d9304484510e" exitCode=0 Feb 23 13:13:47.403884 master-0 kubenswrapper[17411]: I0223 13:13:47.403884 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"a6d9fa15caf4978962c0fd0f30dff553f9e1ce8b8724f621d915d9304484510e"} Feb 23 13:13:47.404360 master-0 kubenswrapper[17411]: I0223 13:13:47.404319 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" 
podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b" Feb 23 13:13:47.404360 master-0 kubenswrapper[17411]: I0223 13:13:47.404365 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b" Feb 23 13:13:48.415691 master-0 kubenswrapper[17411]: I0223 13:13:48.415608 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-6b92p_3d85c030-4931-42d7-afd6-72b41789aea8/cluster-autoscaler-operator/0.log" Feb 23 13:13:48.416695 master-0 kubenswrapper[17411]: I0223 13:13:48.416197 17411 generic.go:334] "Generic (PLEG): container finished" podID="3d85c030-4931-42d7-afd6-72b41789aea8" containerID="23f3545fe3ac985d9c6eaafd117cfe2052081891034bfc99e19a78ed966dc30b" exitCode=255 Feb 23 13:13:48.416695 master-0 kubenswrapper[17411]: I0223 13:13:48.416339 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" event={"ID":"3d85c030-4931-42d7-afd6-72b41789aea8","Type":"ContainerDied","Data":"23f3545fe3ac985d9c6eaafd117cfe2052081891034bfc99e19a78ed966dc30b"} Feb 23 13:13:48.417549 master-0 kubenswrapper[17411]: I0223 13:13:48.417481 17411 scope.go:117] "RemoveContainer" containerID="23f3545fe3ac985d9c6eaafd117cfe2052081891034bfc99e19a78ed966dc30b" Feb 23 13:13:49.138867 master-0 kubenswrapper[17411]: E0223 13:13:49.138762 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:13:49.427254 master-0 kubenswrapper[17411]: I0223 13:13:49.427074 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-6b92p_3d85c030-4931-42d7-afd6-72b41789aea8/cluster-autoscaler-operator/0.log" Feb 23 
13:13:49.428011 master-0 kubenswrapper[17411]: I0223 13:13:49.427975 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-6b92p" event={"ID":"3d85c030-4931-42d7-afd6-72b41789aea8","Type":"ContainerStarted","Data":"d328eddab856871d7a5d1d2299e6e741c29683ce8ec60132cb55ba6f8fb9eee9"} Feb 23 13:13:49.431226 master-0 kubenswrapper[17411]: I0223 13:13:49.431169 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-48xpf_430cb782-18d5-4429-99ef-29d3dca0d803/machine-approver-controller/0.log" Feb 23 13:13:49.432061 master-0 kubenswrapper[17411]: I0223 13:13:49.432000 17411 generic.go:334] "Generic (PLEG): container finished" podID="430cb782-18d5-4429-99ef-29d3dca0d803" containerID="09c37fb183628456535e9d994f19979ed54eaad90335c36b799938ed6f869ef3" exitCode=255 Feb 23 13:13:49.432131 master-0 kubenswrapper[17411]: I0223 13:13:49.432074 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" event={"ID":"430cb782-18d5-4429-99ef-29d3dca0d803","Type":"ContainerDied","Data":"09c37fb183628456535e9d994f19979ed54eaad90335c36b799938ed6f869ef3"} Feb 23 13:13:49.432956 master-0 kubenswrapper[17411]: I0223 13:13:49.432924 17411 scope.go:117] "RemoveContainer" containerID="09c37fb183628456535e9d994f19979ed54eaad90335c36b799938ed6f869ef3" Feb 23 13:13:50.448653 master-0 kubenswrapper[17411]: I0223 13:13:50.448592 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-48xpf_430cb782-18d5-4429-99ef-29d3dca0d803/machine-approver-controller/0.log" Feb 23 13:13:50.449564 master-0 kubenswrapper[17411]: I0223 13:13:50.449530 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-48xpf" 
event={"ID":"430cb782-18d5-4429-99ef-29d3dca0d803","Type":"ContainerStarted","Data":"aaf84f425b7765315ef8fbcc681a419e6eca9ef48e3f0f50d0a991a46f588dda"} Feb 23 13:13:52.470638 master-0 kubenswrapper[17411]: I0223 13:13:52.470558 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-rvz4w_4bc22782-a369-48aa-a0e8-c1c63ffa3053/control-plane-machine-set-operator/1.log" Feb 23 13:13:52.471733 master-0 kubenswrapper[17411]: I0223 13:13:52.471667 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-rvz4w_4bc22782-a369-48aa-a0e8-c1c63ffa3053/control-plane-machine-set-operator/0.log" Feb 23 13:13:52.471809 master-0 kubenswrapper[17411]: I0223 13:13:52.471761 17411 generic.go:334] "Generic (PLEG): container finished" podID="4bc22782-a369-48aa-a0e8-c1c63ffa3053" containerID="9e38aa42b3fe61c9c1cf925b3c085230297f114549a309d0dbbb04d8b9cb3c23" exitCode=1 Feb 23 13:13:52.471859 master-0 kubenswrapper[17411]: I0223 13:13:52.471816 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w" event={"ID":"4bc22782-a369-48aa-a0e8-c1c63ffa3053","Type":"ContainerDied","Data":"9e38aa42b3fe61c9c1cf925b3c085230297f114549a309d0dbbb04d8b9cb3c23"} Feb 23 13:13:52.471893 master-0 kubenswrapper[17411]: I0223 13:13:52.471873 17411 scope.go:117] "RemoveContainer" containerID="0a361025f0f0b4dd3a2d9d3bc39a5bc567c08f5ded2a78f736405795214ce703" Feb 23 13:13:52.472817 master-0 kubenswrapper[17411]: I0223 13:13:52.472779 17411 scope.go:117] "RemoveContainer" containerID="9e38aa42b3fe61c9c1cf925b3c085230297f114549a309d0dbbb04d8b9cb3c23" Feb 23 13:13:53.482078 master-0 kubenswrapper[17411]: I0223 13:13:53.482006 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-rvz4w_4bc22782-a369-48aa-a0e8-c1c63ffa3053/control-plane-machine-set-operator/1.log" Feb 23 13:13:53.482078 master-0 kubenswrapper[17411]: I0223 13:13:53.482085 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-rvz4w" event={"ID":"4bc22782-a369-48aa-a0e8-c1c63ffa3053","Type":"ContainerStarted","Data":"ee8b4ddf8ba08b2edbc4bf0389f4de652e7be4705915b95fc7a9656086a6cc3e"} Feb 23 13:13:55.938326 master-0 kubenswrapper[17411]: I0223 13:13:55.938223 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" start-of-body= Feb 23 13:13:55.938326 master-0 kubenswrapper[17411]: I0223 13:13:55.938308 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Feb 23 13:13:55.939334 master-0 kubenswrapper[17411]: I0223 13:13:55.938960 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" start-of-body= Feb 23 13:13:55.939334 master-0 kubenswrapper[17411]: I0223 13:13:55.938996 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Feb 23 13:13:56.511176 master-0 kubenswrapper[17411]: I0223 13:13:56.511126 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:13:56.511954 master-0 kubenswrapper[17411]: I0223 13:13:56.511912 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="b398a9f3c00c8a1ed9831c18d667495d4a0f74359778ab7ea6c74a83ae93e1ea" exitCode=0 Feb 23 13:13:56.512114 master-0 kubenswrapper[17411]: I0223 13:13:56.512047 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerDied","Data":"b398a9f3c00c8a1ed9831c18d667495d4a0f74359778ab7ea6c74a83ae93e1ea"} Feb 23 13:13:56.513598 master-0 kubenswrapper[17411]: I0223 13:13:56.513036 17411 scope.go:117] "RemoveContainer" containerID="b398a9f3c00c8a1ed9831c18d667495d4a0f74359778ab7ea6c74a83ae93e1ea" Feb 23 13:13:56.516338 master-0 kubenswrapper[17411]: I0223 13:13:56.516059 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/2.log" Feb 23 13:13:56.517594 master-0 kubenswrapper[17411]: I0223 13:13:56.517441 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/1.log" Feb 23 13:13:56.518447 master-0 kubenswrapper[17411]: I0223 13:13:56.518235 17411 generic.go:334] "Generic (PLEG): container finished" podID="16898873-740b-4b85-99cf-d25a28d4ab00" 
containerID="aab74ca70685126f8898c1a27065ea70c7d1d230ea4b10b604c9d038a279487c" exitCode=1 Feb 23 13:13:56.518447 master-0 kubenswrapper[17411]: I0223 13:13:56.518305 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerDied","Data":"aab74ca70685126f8898c1a27065ea70c7d1d230ea4b10b604c9d038a279487c"} Feb 23 13:13:56.518447 master-0 kubenswrapper[17411]: I0223 13:13:56.518376 17411 scope.go:117] "RemoveContainer" containerID="65c1fff907a886de0c20ba50f90af4df31705ea1e7b38b4684f430c20bbd2c46" Feb 23 13:13:56.519239 master-0 kubenswrapper[17411]: I0223 13:13:56.519161 17411 scope.go:117] "RemoveContainer" containerID="aab74ca70685126f8898c1a27065ea70c7d1d230ea4b10b604c9d038a279487c" Feb 23 13:13:56.562646 master-0 kubenswrapper[17411]: E0223 13:13:56.562566 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:13:57.534326 master-0 kubenswrapper[17411]: I0223 13:13:57.534213 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/2.log" Feb 23 13:13:57.535357 master-0 kubenswrapper[17411]: I0223 13:13:57.535046 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerStarted","Data":"09a2a812dfc074881e48f1809e4ebec8c0991b3f0115d4c4a42f2f9c39b6c609"} Feb 23 13:13:57.540661 master-0 kubenswrapper[17411]: I0223 13:13:57.540594 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:13:57.540945 master-0 kubenswrapper[17411]: I0223 13:13:57.540678 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"97003bb78df37a7cac3fa631bd2a6f35a53659f4aa32d9c08a4a9ec06a82442a"} Feb 23 13:13:59.139926 master-0 kubenswrapper[17411]: E0223 13:13:59.139834 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:14:02.581559 master-0 kubenswrapper[17411]: I0223 13:14:02.581478 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/3.log" Feb 23 13:14:02.582699 master-0 kubenswrapper[17411]: I0223 13:14:02.581996 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/2.log" Feb 23 13:14:02.582699 master-0 kubenswrapper[17411]: I0223 13:14:02.582037 17411 generic.go:334] "Generic (PLEG): container finished" podID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" containerID="654b839ba70d24ce75c6c6573c01c8e43093b01864c80ec73e61d6789a8e902a" exitCode=1 Feb 23 13:14:02.582699 master-0 kubenswrapper[17411]: I0223 13:14:02.582069 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" 
event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerDied","Data":"654b839ba70d24ce75c6c6573c01c8e43093b01864c80ec73e61d6789a8e902a"} Feb 23 13:14:02.582699 master-0 kubenswrapper[17411]: I0223 13:14:02.582108 17411 scope.go:117] "RemoveContainer" containerID="fdf69ec24e1c6086e49f484fb8b8dd94cca3653e3ce3d1c63357917cb9333952" Feb 23 13:14:02.583004 master-0 kubenswrapper[17411]: I0223 13:14:02.582833 17411 scope.go:117] "RemoveContainer" containerID="654b839ba70d24ce75c6c6573c01c8e43093b01864c80ec73e61d6789a8e902a" Feb 23 13:14:02.583352 master-0 kubenswrapper[17411]: E0223 13:14:02.583288 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:14:03.591223 master-0 kubenswrapper[17411]: I0223 13:14:03.591170 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/3.log" Feb 23 13:14:05.937552 master-0 kubenswrapper[17411]: I0223 13:14:05.937428 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:14:05.938584 master-0 kubenswrapper[17411]: I0223 13:14:05.938243 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:14:06.622547 master-0 kubenswrapper[17411]: I0223 13:14:06.622424 17411 generic.go:334] "Generic (PLEG): container finished" podID="b4c51b25-f013-4f5c-acbd-598350468192" 
containerID="95e4d714a5b0e16564b86ea287bf522f1be8abd96b5a27e8ec1dc65852f2bbda" exitCode=0
Feb 23 13:14:06.623949 master-0 kubenswrapper[17411]: I0223 13:14:06.623892 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" event={"ID":"b4c51b25-f013-4f5c-acbd-598350468192","Type":"ContainerDied","Data":"95e4d714a5b0e16564b86ea287bf522f1be8abd96b5a27e8ec1dc65852f2bbda"}
Feb 23 13:14:06.624069 master-0 kubenswrapper[17411]: I0223 13:14:06.623961 17411 scope.go:117] "RemoveContainer" containerID="c7825c24449084470222f141223b142962350c867bc7733a06b6b459b6dc7405"
Feb 23 13:14:06.624714 master-0 kubenswrapper[17411]: I0223 13:14:06.624664 17411 scope.go:117] "RemoveContainer" containerID="95e4d714a5b0e16564b86ea287bf522f1be8abd96b5a27e8ec1dc65852f2bbda"
Feb 23 13:14:07.633938 master-0 kubenswrapper[17411]: I0223 13:14:07.633842 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-8mw8h" event={"ID":"b4c51b25-f013-4f5c-acbd-598350468192","Type":"ContainerStarted","Data":"37e0bedd8f483d33f31c72c6675f752a5dd5ab126687ef0a53a319e1efbfeffc"}
Feb 23 13:14:08.939073 master-0 kubenswrapper[17411]: I0223 13:14:08.938963 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:14:08.940092 master-0 kubenswrapper[17411]: I0223 13:14:08.939075 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:14:09.141051 master-0 kubenswrapper[17411]: E0223 13:14:09.140944 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:14:09.141051 master-0 kubenswrapper[17411]: E0223 13:14:09.141027 17411 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 23 13:14:09.651500 master-0 kubenswrapper[17411]: I0223 13:14:09.651311 17411 generic.go:334] "Generic (PLEG): container finished" podID="bfa537d0-11d0-4e8d-8b0e-bd5959f586f4" containerID="d7cecdb78483464ca842eef33778c826aa1ed5cf76ce100a4441589d8e22de94" exitCode=0
Feb 23 13:14:09.651895 master-0 kubenswrapper[17411]: I0223 13:14:09.651825 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" event={"ID":"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4","Type":"ContainerDied","Data":"d7cecdb78483464ca842eef33778c826aa1ed5cf76ce100a4441589d8e22de94"}
Feb 23 13:14:09.652927 master-0 kubenswrapper[17411]: I0223 13:14:09.652891 17411 scope.go:117] "RemoveContainer" containerID="d7cecdb78483464ca842eef33778c826aa1ed5cf76ce100a4441589d8e22de94"
Feb 23 13:14:10.665181 master-0 kubenswrapper[17411]: I0223 13:14:10.665116 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9" event={"ID":"bfa537d0-11d0-4e8d-8b0e-bd5959f586f4","Type":"ContainerStarted","Data":"5361e43d6d4246ae6d57f49f3f09d4ed7900bd1feb4fd20073c11ad19b8f06de"}
Feb 23 13:14:10.666734 master-0 kubenswrapper[17411]: I0223 13:14:10.666648 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9"
Feb 23 13:14:10.673582 master-0 kubenswrapper[17411]: I0223 13:14:10.673484 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65d5554fbd-fw5c9"
Feb 23 13:14:13.564079 master-0 kubenswrapper[17411]: E0223 13:14:13.563959 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 23 13:14:15.567815 master-0 kubenswrapper[17411]: E0223 13:14:15.567624 17411 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-w868k.1896e2436cea6b62 openshift-multus 14061 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-w868k,UID:e3516f78-36c2-4b5e-a265-96eb305235f9,APIVersion:v1,ResourceVersion:13976,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:11:22 +0000 UTC,LastTimestamp:2026-02-23 13:11:32.630550347 +0000 UTC m=+286.058056944,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:14:15.870364 master-0 kubenswrapper[17411]: I0223 13:14:15.870162 17411 scope.go:117] "RemoveContainer" containerID="654b839ba70d24ce75c6c6573c01c8e43093b01864c80ec73e61d6789a8e902a"
Feb 23 13:14:16.718033 master-0 kubenswrapper[17411]: I0223 13:14:16.717964 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/3.log"
Feb 23 13:14:16.718774 master-0 kubenswrapper[17411]: I0223 13:14:16.718044 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"89c68aa1c52809c1469e6ffbd2eee04b300625fa0bdc28cc370e25fa90995cb5"}
Feb 23 13:14:17.318695 master-0 kubenswrapper[17411]: E0223 13:14:17.318635 17411 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 23 13:14:17.318695 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-operator-5df5ffc47c-zwmzz_openshift-console-operator_679fabb5-a261-402e-b5be-8fe7f0da0ec8_0(1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608): error adding pod openshift-console-operator_console-operator-5df5ffc47c-zwmzz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608" Netns:"/var/run/netns/e559a834-56d5-4fe6-83c6-f302c581c77b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5df5ffc47c-zwmzz;K8S_POD_INFRA_CONTAINER_ID=1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608;K8S_POD_UID=679fabb5-a261-402e-b5be-8fe7f0da0ec8" Path:"" ERRORED: error configuring pod [openshift-console-operator/console-operator-5df5ffc47c-zwmzz] networking: Multus: [openshift-console-operator/console-operator-5df5ffc47c-zwmzz/679fabb5-a261-402e-b5be-8fe7f0da0ec8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-operator-5df5ffc47c-zwmzz in out of cluster comm: SetNetworkStatus: failed to update the pod console-operator-5df5ffc47c-zwmzz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:14:17.318695 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:14:17.318695 master-0 kubenswrapper[17411]: >
Feb 23 13:14:17.318869 master-0 kubenswrapper[17411]: E0223 13:14:17.318730 17411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 23 13:14:17.318869 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-operator-5df5ffc47c-zwmzz_openshift-console-operator_679fabb5-a261-402e-b5be-8fe7f0da0ec8_0(1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608): error adding pod openshift-console-operator_console-operator-5df5ffc47c-zwmzz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608" Netns:"/var/run/netns/e559a834-56d5-4fe6-83c6-f302c581c77b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5df5ffc47c-zwmzz;K8S_POD_INFRA_CONTAINER_ID=1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608;K8S_POD_UID=679fabb5-a261-402e-b5be-8fe7f0da0ec8" Path:"" ERRORED: error configuring pod [openshift-console-operator/console-operator-5df5ffc47c-zwmzz] networking: Multus: [openshift-console-operator/console-operator-5df5ffc47c-zwmzz/679fabb5-a261-402e-b5be-8fe7f0da0ec8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-operator-5df5ffc47c-zwmzz in out of cluster comm: SetNetworkStatus: failed to update the pod console-operator-5df5ffc47c-zwmzz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:14:17.318869 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:14:17.318869 master-0 kubenswrapper[17411]: > pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:14:17.318869 master-0 kubenswrapper[17411]: E0223 13:14:17.318763 17411 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 23 13:14:17.318869 master-0 kubenswrapper[17411]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-operator-5df5ffc47c-zwmzz_openshift-console-operator_679fabb5-a261-402e-b5be-8fe7f0da0ec8_0(1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608): error adding pod openshift-console-operator_console-operator-5df5ffc47c-zwmzz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608" Netns:"/var/run/netns/e559a834-56d5-4fe6-83c6-f302c581c77b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5df5ffc47c-zwmzz;K8S_POD_INFRA_CONTAINER_ID=1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608;K8S_POD_UID=679fabb5-a261-402e-b5be-8fe7f0da0ec8" Path:"" ERRORED: error configuring pod [openshift-console-operator/console-operator-5df5ffc47c-zwmzz] networking: Multus: [openshift-console-operator/console-operator-5df5ffc47c-zwmzz/679fabb5-a261-402e-b5be-8fe7f0da0ec8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-operator-5df5ffc47c-zwmzz in out of cluster comm: SetNetworkStatus: failed to update the pod console-operator-5df5ffc47c-zwmzz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 23 13:14:17.318869 master-0 kubenswrapper[17411]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 23 13:14:17.318869 master-0 kubenswrapper[17411]: > pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:14:17.319096 master-0 kubenswrapper[17411]: E0223 13:14:17.318837 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-operator-5df5ffc47c-zwmzz_openshift-console-operator(679fabb5-a261-402e-b5be-8fe7f0da0ec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-operator-5df5ffc47c-zwmzz_openshift-console-operator(679fabb5-a261-402e-b5be-8fe7f0da0ec8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-operator-5df5ffc47c-zwmzz_openshift-console-operator_679fabb5-a261-402e-b5be-8fe7f0da0ec8_0(1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608): error adding pod openshift-console-operator_console-operator-5df5ffc47c-zwmzz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608\\\" Netns:\\\"/var/run/netns/e559a834-56d5-4fe6-83c6-f302c581c77b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5df5ffc47c-zwmzz;K8S_POD_INFRA_CONTAINER_ID=1c735cb89e630f994143e84430db5a04484d8280f399d29460aa865bcf69c608;K8S_POD_UID=679fabb5-a261-402e-b5be-8fe7f0da0ec8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-console-operator/console-operator-5df5ffc47c-zwmzz] networking: Multus: [openshift-console-operator/console-operator-5df5ffc47c-zwmzz/679fabb5-a261-402e-b5be-8fe7f0da0ec8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-operator-5df5ffc47c-zwmzz in out of cluster comm: SetNetworkStatus: failed to update the pod console-operator-5df5ffc47c-zwmzz in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8"
Feb 23 13:14:17.723337 master-0 kubenswrapper[17411]: I0223 13:14:17.723283 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:14:17.724019 master-0 kubenswrapper[17411]: I0223 13:14:17.723728 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:14:18.938153 master-0 kubenswrapper[17411]: I0223 13:14:18.938046 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:14:18.938798 master-0 kubenswrapper[17411]: I0223 13:14:18.938152 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:14:21.407760 master-0 kubenswrapper[17411]: E0223 13:14:21.407661 17411 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:21.767144 master-0 kubenswrapper[17411]: I0223 13:14:21.767039 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"799b125cfa6a69cad71d84c0c722af39b9b8e61f9ef241934052244a54856ad6"}
Feb 23 13:14:22.782830 master-0 kubenswrapper[17411]: I0223 13:14:22.782686 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"19e3c963f3e6d5d375cbd9d65c07439a5640f3f743bc5385f8796e4d637cc007"}
Feb 23 13:14:22.782830 master-0 kubenswrapper[17411]: I0223 13:14:22.782779 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"5efa4304bd9e1be86b105d28ec8c70dba2472293df3d2f2805e02b42c41c7d0e"}
Feb 23 13:14:22.782830 master-0 kubenswrapper[17411]: I0223 13:14:22.782801 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"e9106c9a2c451e063449fbc959d068afeedaf28598aafb7ebcb96666adbf00ee"}
Feb 23 13:14:23.796283 master-0 kubenswrapper[17411]: I0223 13:14:23.796164 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"3550859fb001c9673eae69e16483efbabda46a8e810c217a737ec9c2b0293d6a"}
Feb 23 13:14:23.797396 master-0 kubenswrapper[17411]: I0223 13:14:23.796877 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b"
Feb 23 13:14:23.797396 master-0 kubenswrapper[17411]: I0223 13:14:23.796924 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2075c4ad-56e8-474c-8a4e-7bdea9d28c0b"
Feb 23 13:14:26.896552 master-0 kubenswrapper[17411]: I0223 13:14:26.896451 17411 status_manager.go:851] "Failed to get status for pod" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" pod="openshift-multus/cni-sysctl-allowlist-ds-w868k" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cni-sysctl-allowlist-ds-w868k)"
Feb 23 13:14:26.899056 master-0 kubenswrapper[17411]: I0223 13:14:26.899011 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:26.899056 master-0 kubenswrapper[17411]: I0223 13:14:26.899051 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:27.166065 master-0 kubenswrapper[17411]: I0223 13:14:27.165881 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:48680->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Feb 23 13:14:27.166065 master-0 kubenswrapper[17411]: I0223 13:14:27.165993 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:48680->127.0.0.1:10357: read: connection reset by peer"
Feb 23 13:14:27.166532 master-0 kubenswrapper[17411]: I0223 13:14:27.166106 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:27.167440 master-0 kubenswrapper[17411]: I0223 13:14:27.167390 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"97003bb78df37a7cac3fa631bd2a6f35a53659f4aa32d9c08a4a9ec06a82442a"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 23 13:14:27.167587 master-0 kubenswrapper[17411]: I0223 13:14:27.167548 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" containerID="cri-o://97003bb78df37a7cac3fa631bd2a6f35a53659f4aa32d9c08a4a9ec06a82442a" gracePeriod=30
Feb 23 13:14:27.829990 master-0 kubenswrapper[17411]: I0223 13:14:27.829919 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/1.log"
Feb 23 13:14:27.833430 master-0 kubenswrapper[17411]: I0223 13:14:27.833381 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log"
Feb 23 13:14:27.833506 master-0 kubenswrapper[17411]: I0223 13:14:27.833459 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="97003bb78df37a7cac3fa631bd2a6f35a53659f4aa32d9c08a4a9ec06a82442a" exitCode=255
Feb 23 13:14:27.833566 master-0 kubenswrapper[17411]: I0223 13:14:27.833506 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerDied","Data":"97003bb78df37a7cac3fa631bd2a6f35a53659f4aa32d9c08a4a9ec06a82442a"}
Feb 23 13:14:27.833676 master-0 kubenswrapper[17411]: I0223 13:14:27.833551 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"fc753d9e3c2d3c886c066424fb6affbf7cdff2f3e33327f6c7227c1a88592ae3"}
Feb 23 13:14:27.833743 master-0 kubenswrapper[17411]: I0223 13:14:27.833685 17411 scope.go:117] "RemoveContainer" containerID="b398a9f3c00c8a1ed9831c18d667495d4a0f74359778ab7ea6c74a83ae93e1ea"
Feb 23 13:14:28.843770 master-0 kubenswrapper[17411]: I0223 13:14:28.843698 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/1.log"
Feb 23 13:14:28.846696 master-0 kubenswrapper[17411]: I0223 13:14:28.846633 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log"
Feb 23 13:14:29.200921 master-0 kubenswrapper[17411]: E0223 13:14:29.200854 17411 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:14:19Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:14:19Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:14:19Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:14:19Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:14:30.565619 master-0 kubenswrapper[17411]: E0223 13:14:30.565511 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 23 13:14:34.638822 master-0 kubenswrapper[17411]: I0223 13:14:34.638764 17411 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:34.650142 master-0 kubenswrapper[17411]: I0223 13:14:34.650102 17411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:14:34.657392 master-0 kubenswrapper[17411]: I0223 13:14:34.657351 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 23 13:14:34.669814 master-0 kubenswrapper[17411]: I0223 13:14:34.669767 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-zwmzz"]
Feb 23 13:14:34.686674 master-0 kubenswrapper[17411]: W0223 13:14:34.686586 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0e437b4_e6fd_482f_91a2_f48b9f087321.slice/crio-50c7ec1f5ca4265757a29bcd7bd1cb805b067e2d12981dec3cf9d22b61572c34 WatchSource:0}: Error finding container 50c7ec1f5ca4265757a29bcd7bd1cb805b067e2d12981dec3cf9d22b61572c34: Status 404 returned error can't find the container with id 50c7ec1f5ca4265757a29bcd7bd1cb805b067e2d12981dec3cf9d22b61572c34
Feb 23 13:14:34.689589 master-0 kubenswrapper[17411]: I0223 13:14:34.689519 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 23 13:14:34.695486 master-0 kubenswrapper[17411]: I0223 13:14:34.695446 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 23 13:14:34.700857 master-0 kubenswrapper[17411]: I0223 13:14:34.700803 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 23 13:14:34.707077 master-0 kubenswrapper[17411]: I0223 13:14:34.706884 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"]
Feb 23 13:14:34.713527 master-0 kubenswrapper[17411]: I0223 13:14:34.713462 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-8hstp"]
Feb 23 13:14:34.775341 master-0 kubenswrapper[17411]: I0223 13:14:34.770401 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 23 13:14:34.775341 master-0 kubenswrapper[17411]: I0223 13:14:34.775269 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 23 13:14:34.853936 master-0 kubenswrapper[17411]: I0223 13:14:34.853859 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-w868k"]
Feb 23 13:14:34.859572 master-0 kubenswrapper[17411]: I0223 13:14:34.859498 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-w868k"]
Feb 23 13:14:34.879017 master-0 kubenswrapper[17411]: I0223 13:14:34.878956 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44b07d33-6e84-434e-9a14-431846620968" path="/var/lib/kubelet/pods/44b07d33-6e84-434e-9a14-431846620968/volumes"
Feb 23 13:14:34.879674 master-0 kubenswrapper[17411]: I0223 13:14:34.879638 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72fb1770-7d0c-4c92-9f0b-3139f27510ca" path="/var/lib/kubelet/pods/72fb1770-7d0c-4c92-9f0b-3139f27510ca/volumes"
Feb 23 13:14:34.880332 master-0 kubenswrapper[17411]: I0223 13:14:34.880299 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" path="/var/lib/kubelet/pods/e3516f78-36c2-4b5e-a265-96eb305235f9/volumes"
Feb 23 13:14:34.907730 master-0 kubenswrapper[17411]: I0223 13:14:34.907585 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerStarted","Data":"50c7ec1f5ca4265757a29bcd7bd1cb805b067e2d12981dec3cf9d22b61572c34"}
Feb 23 13:14:34.910898 master-0 kubenswrapper[17411]: I0223 13:14:34.909333 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerStarted","Data":"4f102ccecc8dc7fd8bbe326491c89479fb7c41d58c24721eefd8447bb566149b"}
Feb 23 13:14:34.912301 master-0 kubenswrapper[17411]: I0223 13:14:34.912267 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerStarted","Data":"fcfc88379baf23d7a87fa2f79e200ec61bdbcac138e571974b4701a1640fa7af"}
Feb 23 13:14:35.938127 master-0 kubenswrapper[17411]: I0223 13:14:35.938084 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:35.939712 master-0 kubenswrapper[17411]: I0223 13:14:35.939660 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:14:36.937038 master-0 kubenswrapper[17411]: I0223 13:14:36.935172 17411 generic.go:334] "Generic (PLEG): container finished" podID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerID="1e2c8bf2649bb83ebb59ccefe68f87d1cbf2774db7c0e989383bc2b02c2dea7b" exitCode=0
Feb 23 13:14:36.937038 master-0 kubenswrapper[17411]: I0223 13:14:36.935388 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerDied","Data":"1e2c8bf2649bb83ebb59ccefe68f87d1cbf2774db7c0e989383bc2b02c2dea7b"}
Feb 23 13:14:36.941791 master-0 kubenswrapper[17411]: I0223 13:14:36.941736 17411 generic.go:334] "Generic (PLEG): container finished" podID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerID="a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0" exitCode=0
Feb 23 13:14:36.944165 master-0 kubenswrapper[17411]: I0223 13:14:36.943309 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerDied","Data":"a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0"}
Feb 23 13:14:36.944165 master-0 kubenswrapper[17411]: I0223 13:14:36.943865 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:37.957062 master-0 kubenswrapper[17411]: I0223 13:14:37.956992 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerStarted","Data":"d5d96f1ccc99f0c2ba6bde8bbc99703aa13f3dff0a7f5689bb7825e07f78bde4"}
Feb 23 13:14:37.957666 master-0 kubenswrapper[17411]: I0223 13:14:37.957526 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:14:37.985055 master-0 kubenswrapper[17411]: I0223 13:14:37.984966 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podStartSLOduration=451.056672238 podStartE2EDuration="7m33.984944887s" podCreationTimestamp="2026-02-23 13:07:04 +0000 UTC" firstStartedPulling="2026-02-23 13:14:34.686642835 +0000 UTC m=+468.114149452" lastFinishedPulling="2026-02-23 13:14:37.614915504 +0000 UTC m=+471.042422101" observedRunningTime="2026-02-23 13:14:37.981789369 +0000 UTC m=+471.409295966" watchObservedRunningTime="2026-02-23 13:14:37.984944887 +0000 UTC m=+471.412451484"
Feb 23 13:14:38.937915 master-0 kubenswrapper[17411]: I0223 13:14:38.937838 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:14:38.938083 master-0 kubenswrapper[17411]: I0223 13:14:38.937930 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:14:38.958039 master-0 kubenswrapper[17411]: I0223 13:14:38.957959 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:14:38.958692 master-0 kubenswrapper[17411]: I0223 13:14:38.958039 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:14:38.971378 master-0 kubenswrapper[17411]: I0223 13:14:38.971311 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerStarted","Data":"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd"}
Feb 23 13:14:39.971640 master-0 kubenswrapper[17411]: I0223 13:14:39.971560 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:14:39.971640 master-0 kubenswrapper[17411]: I0223 13:14:39.971637 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:14:41.006642 master-0 kubenswrapper[17411]: I0223 13:14:41.006580 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerStarted","Data":"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6"}
Feb 23 13:14:41.924719 master-0 kubenswrapper[17411]: I0223 13:14:41.924656 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 23 13:14:42.031712 master-0 kubenswrapper[17411]: I0223 13:14:42.031656 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerStarted","Data":"6724674d6284fdd05381b7d0daef8a39a226e4c324110414bcbc6793e5bd3d5f"}
Feb 23 13:14:42.031712 master-0 kubenswrapper[17411]: I0223 13:14:42.031714 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerStarted","Data":"7355347876eb6f26645282da59b2039fa5f5bf7c99724e7e85490f25fa53bd9d"}
Feb 23 13:14:42.031712 master-0 kubenswrapper[17411]: I0223 13:14:42.031729 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerStarted","Data":"379c5ee6081cf25cf74b27ad60c344645f271de34631a4c85b7eae36a346bc1d"}
Feb 23 13:14:42.032512 master-0 kubenswrapper[17411]: I0223 13:14:42.031741 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerStarted","Data":"df3847509227b18cfa2057df9af88aeb7bbc0404ce6befb7751bd3e07fced95b"}
Feb 23 13:14:42.037286 master-0 kubenswrapper[17411]: I0223 13:14:42.037234 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerStarted","Data":"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183"}
Feb 23 13:14:42.037398 master-0 kubenswrapper[17411]: I0223 13:14:42.037293 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerStarted","Data":"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee"}
Feb 23 13:14:42.037398 master-0 kubenswrapper[17411]: I0223 13:14:42.037305 17411 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerStarted","Data":"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916"} Feb 23 13:14:42.037398 master-0 kubenswrapper[17411]: I0223 13:14:42.037314 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerStarted","Data":"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118"} Feb 23 13:14:42.103312 master-0 kubenswrapper[17411]: I0223 13:14:42.103228 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=431.057474946 podStartE2EDuration="7m15.103215801s" podCreationTimestamp="2026-02-23 13:07:27 +0000 UTC" firstStartedPulling="2026-02-23 13:14:34.688265101 +0000 UTC m=+468.115771698" lastFinishedPulling="2026-02-23 13:14:38.734005956 +0000 UTC m=+472.161512553" observedRunningTime="2026-02-23 13:14:42.101770091 +0000 UTC m=+475.529276688" watchObservedRunningTime="2026-02-23 13:14:42.103215801 +0000 UTC m=+475.530722398" Feb 23 13:14:42.741800 master-0 kubenswrapper[17411]: I0223 13:14:42.741733 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 23 13:14:43.048229 master-0 kubenswrapper[17411]: I0223 13:14:43.048037 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerStarted","Data":"21d478f17ab841facc6af3c11882e409ca6a5733c3567c73c296122b45bd2178"} Feb 23 13:14:43.048229 master-0 kubenswrapper[17411]: I0223 13:14:43.048127 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerStarted","Data":"848dd18f30dfd1e2f1024adae59eb6e05998671f920f766e813b8325be190abb"} Feb 23 13:14:43.088119 master-0 kubenswrapper[17411]: I0223 13:14:43.088006 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:14:43.088470 master-0 kubenswrapper[17411]: I0223 13:14:43.088171 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:14:43.138799 master-0 kubenswrapper[17411]: I0223 13:14:43.138730 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:14:43.378867 master-0 kubenswrapper[17411]: E0223 13:14:43.378692 17411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 23 13:14:43.384823 master-0 kubenswrapper[17411]: I0223 13:14:43.384749 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=425.174793459 podStartE2EDuration="7m11.384730561s" podCreationTimestamp="2026-02-23 13:07:32 +0000 UTC" firstStartedPulling="2026-02-23 13:14:34.649981095 +0000 UTC m=+468.077487692" lastFinishedPulling="2026-02-23 13:14:40.859918187 +0000 UTC m=+474.287424794" observedRunningTime="2026-02-23 13:14:43.381064248 +0000 UTC m=+476.808570905" watchObservedRunningTime="2026-02-23 13:14:43.384730561 +0000 UTC m=+476.812237158" Feb 23 13:14:43.432435 master-0 kubenswrapper[17411]: I0223 13:14:43.432334 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.4323117490000001 podStartE2EDuration="1.432311749s" podCreationTimestamp="2026-02-23 13:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-23 13:14:43.423663266 +0000 UTC m=+476.851169893" watchObservedRunningTime="2026-02-23 13:14:43.432311749 +0000 UTC m=+476.859818356" Feb 23 13:14:44.110112 master-0 kubenswrapper[17411]: I0223 13:14:44.110051 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:14:47.093694 master-0 kubenswrapper[17411]: I0223 13:14:47.093605 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/4.log" Feb 23 13:14:47.094875 master-0 kubenswrapper[17411]: I0223 13:14:47.094480 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/3.log" Feb 23 13:14:47.094875 master-0 kubenswrapper[17411]: I0223 13:14:47.094563 17411 generic.go:334] "Generic (PLEG): container finished" podID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" containerID="89c68aa1c52809c1469e6ffbd2eee04b300625fa0bdc28cc370e25fa90995cb5" exitCode=1 Feb 23 13:14:47.094875 master-0 kubenswrapper[17411]: I0223 13:14:47.094608 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerDied","Data":"89c68aa1c52809c1469e6ffbd2eee04b300625fa0bdc28cc370e25fa90995cb5"} Feb 23 13:14:47.094875 master-0 kubenswrapper[17411]: I0223 13:14:47.094661 17411 scope.go:117] "RemoveContainer" containerID="654b839ba70d24ce75c6c6573c01c8e43093b01864c80ec73e61d6789a8e902a" Feb 23 13:14:47.095746 master-0 kubenswrapper[17411]: I0223 13:14:47.095683 17411 scope.go:117] "RemoveContainer" containerID="89c68aa1c52809c1469e6ffbd2eee04b300625fa0bdc28cc370e25fa90995cb5" Feb 23 13:14:47.096180 master-0 kubenswrapper[17411]: 
E0223 13:14:47.096115 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:14:47.555141 master-0 kubenswrapper[17411]: I0223 13:14:47.555011 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:14:47.555141 master-0 kubenswrapper[17411]: I0223 13:14:47.555065 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:14:47.555141 master-0 kubenswrapper[17411]: I0223 13:14:47.555106 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:14:47.555141 master-0 kubenswrapper[17411]: I0223 13:14:47.555129 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" 
podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:14:47.566924 master-0 kubenswrapper[17411]: E0223 13:14:47.566828 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:14:48.945102 master-0 kubenswrapper[17411]: I0223 13:14:48.945003 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:14:48.945102 master-0 kubenswrapper[17411]: I0223 13:14:48.945090 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:14:49.114053 master-0 kubenswrapper[17411]: I0223 13:14:49.113972 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/4.log" Feb 23 13:14:50.435971 master-0 kubenswrapper[17411]: I0223 13:14:50.435891 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-retry-1-master-0"] Feb 
23 13:14:50.436776 master-0 kubenswrapper[17411]: E0223 13:14:50.436219 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b07d33-6e84-434e-9a14-431846620968" containerName="multus-admission-controller" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436233 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b07d33-6e84-434e-9a14-431846620968" containerName="multus-admission-controller" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: E0223 13:14:50.436269 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72fb1770-7d0c-4c92-9f0b-3139f27510ca" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436275 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="72fb1770-7d0c-4c92-9f0b-3139f27510ca" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: E0223 13:14:50.436290 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b07d33-6e84-434e-9a14-431846620968" containerName="kube-rbac-proxy" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436296 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b07d33-6e84-434e-9a14-431846620968" containerName="kube-rbac-proxy" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: E0223 13:14:50.436315 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0e0f7e-b725-4aae-8180-024b699386d5" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436322 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0e0f7e-b725-4aae-8180-024b699386d5" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: E0223 13:14:50.436335 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" containerName="kube-multus-additional-cni-plugins" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 
13:14:50.436340 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" containerName="kube-multus-additional-cni-plugins" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: E0223 13:14:50.436352 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382f96d2-f66c-4adc-9b6d-4ed63124da89" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436358 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="382f96d2-f66c-4adc-9b6d-4ed63124da89" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436475 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="72fb1770-7d0c-4c92-9f0b-3139f27510ca" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436518 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b07d33-6e84-434e-9a14-431846620968" containerName="kube-rbac-proxy" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436525 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="382f96d2-f66c-4adc-9b6d-4ed63124da89" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436557 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3516f78-36c2-4b5e-a265-96eb305235f9" containerName="kube-multus-additional-cni-plugins" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436569 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0e0f7e-b725-4aae-8180-024b699386d5" containerName="installer" Feb 23 13:14:50.436776 master-0 kubenswrapper[17411]: I0223 13:14:50.436584 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b07d33-6e84-434e-9a14-431846620968" containerName="multus-admission-controller" Feb 23 13:14:50.437785 master-0 kubenswrapper[17411]: I0223 13:14:50.437155 17411 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.440587 master-0 kubenswrapper[17411]: I0223 13:14:50.440538 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-q2chk" Feb 23 13:14:50.440833 master-0 kubenswrapper[17411]: I0223 13:14:50.440799 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:14:50.554093 master-0 kubenswrapper[17411]: I0223 13:14:50.553997 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.554093 master-0 kubenswrapper[17411]: I0223 13:14:50.554090 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23f6e482-8da1-4df0-8de6-66a930e45a20-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.554546 master-0 kubenswrapper[17411]: I0223 13:14:50.554127 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.584314 master-0 kubenswrapper[17411]: I0223 13:14:50.584195 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-retry-1-master-0"] Feb 23 
13:14:50.656069 master-0 kubenswrapper[17411]: I0223 13:14:50.655996 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.656299 master-0 kubenswrapper[17411]: I0223 13:14:50.656113 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23f6e482-8da1-4df0-8de6-66a930e45a20-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.656299 master-0 kubenswrapper[17411]: I0223 13:14:50.656187 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.656488 master-0 kubenswrapper[17411]: I0223 13:14:50.656441 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.656531 master-0 kubenswrapper[17411]: I0223 13:14:50.656514 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " 
pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.700996 master-0 kubenswrapper[17411]: I0223 13:14:50.700941 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23f6e482-8da1-4df0-8de6-66a930e45a20-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:50.760771 master-0 kubenswrapper[17411]: I0223 13:14:50.760499 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:14:51.319893 master-0 kubenswrapper[17411]: I0223 13:14:51.319817 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-retry-1-master-0"] Feb 23 13:14:51.324140 master-0 kubenswrapper[17411]: W0223 13:14:51.324073 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod23f6e482_8da1_4df0_8de6_66a930e45a20.slice/crio-b4cbd448858d62088101bec41ca7077f45d000b302c703d290e5d6c85d16df57 WatchSource:0}: Error finding container b4cbd448858d62088101bec41ca7077f45d000b302c703d290e5d6c85d16df57: Status 404 returned error can't find the container with id b4cbd448858d62088101bec41ca7077f45d000b302c703d290e5d6c85d16df57 Feb 23 13:14:52.142877 master-0 kubenswrapper[17411]: I0223 13:14:52.142780 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" event={"ID":"23f6e482-8da1-4df0-8de6-66a930e45a20","Type":"ContainerStarted","Data":"7e430dd00f0a0105863d8293fdc97c4fe96bc4ed6b8ff010a52f450aad23346b"} Feb 23 13:14:52.142877 master-0 kubenswrapper[17411]: I0223 13:14:52.142864 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" 
event={"ID":"23f6e482-8da1-4df0-8de6-66a930e45a20","Type":"ContainerStarted","Data":"b4cbd448858d62088101bec41ca7077f45d000b302c703d290e5d6c85d16df57"} Feb 23 13:14:52.191686 master-0 kubenswrapper[17411]: I0223 13:14:52.191573 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" podStartSLOduration=2.191544648 podStartE2EDuration="2.191544648s" podCreationTimestamp="2026-02-23 13:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:14:52.190875689 +0000 UTC m=+485.618382286" watchObservedRunningTime="2026-02-23 13:14:52.191544648 +0000 UTC m=+485.619051255" Feb 23 13:14:57.190995 master-0 kubenswrapper[17411]: I0223 13:14:57.190945 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/3.log" Feb 23 13:14:57.191550 master-0 kubenswrapper[17411]: I0223 13:14:57.191525 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/2.log" Feb 23 13:14:57.191924 master-0 kubenswrapper[17411]: I0223 13:14:57.191893 17411 generic.go:334] "Generic (PLEG): container finished" podID="16898873-740b-4b85-99cf-d25a28d4ab00" containerID="09a2a812dfc074881e48f1809e4ebec8c0991b3f0115d4c4a42f2f9c39b6c609" exitCode=1 Feb 23 13:14:57.191970 master-0 kubenswrapper[17411]: I0223 13:14:57.191933 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerDied","Data":"09a2a812dfc074881e48f1809e4ebec8c0991b3f0115d4c4a42f2f9c39b6c609"} Feb 23 13:14:57.192003 master-0 kubenswrapper[17411]: I0223 
13:14:57.191967 17411 scope.go:117] "RemoveContainer" containerID="aab74ca70685126f8898c1a27065ea70c7d1d230ea4b10b604c9d038a279487c" Feb 23 13:14:57.192554 master-0 kubenswrapper[17411]: I0223 13:14:57.192533 17411 scope.go:117] "RemoveContainer" containerID="09a2a812dfc074881e48f1809e4ebec8c0991b3f0115d4c4a42f2f9c39b6c609" Feb 23 13:14:57.192781 master-0 kubenswrapper[17411]: E0223 13:14:57.192760 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-8mxs2_openshift-machine-api(16898873-740b-4b85-99cf-d25a28d4ab00)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" Feb 23 13:14:57.555160 master-0 kubenswrapper[17411]: I0223 13:14:57.554920 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:14:57.555160 master-0 kubenswrapper[17411]: I0223 13:14:57.555025 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:14:57.555674 master-0 kubenswrapper[17411]: I0223 13:14:57.555394 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get 
\"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:14:57.556400 master-0 kubenswrapper[17411]: I0223 13:14:57.556336 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:14:57.869582 master-0 kubenswrapper[17411]: I0223 13:14:57.869392 17411 scope.go:117] "RemoveContainer" containerID="89c68aa1c52809c1469e6ffbd2eee04b300625fa0bdc28cc370e25fa90995cb5" Feb 23 13:14:57.869876 master-0 kubenswrapper[17411]: E0223 13:14:57.869770 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:14:58.026023 master-0 kubenswrapper[17411]: I0223 13:14:58.025914 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:59720->127.0.0.1:10357: read: connection reset by peer" start-of-body= Feb 23 13:14:58.026410 master-0 kubenswrapper[17411]: I0223 13:14:58.026015 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:59720->127.0.0.1:10357: read: connection reset by peer" Feb 23 13:14:58.026410 master-0 kubenswrapper[17411]: I0223 13:14:58.026179 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:14:58.028052 master-0 kubenswrapper[17411]: I0223 13:14:58.027109 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"fc753d9e3c2d3c886c066424fb6affbf7cdff2f3e33327f6c7227c1a88592ae3"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 23 13:14:58.028052 master-0 kubenswrapper[17411]: I0223 13:14:58.027227 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" containerID="cri-o://fc753d9e3c2d3c886c066424fb6affbf7cdff2f3e33327f6c7227c1a88592ae3" gracePeriod=30 Feb 23 13:14:58.203478 master-0 kubenswrapper[17411]: I0223 13:14:58.203406 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/2.log" Feb 23 13:14:58.204315 master-0 kubenswrapper[17411]: I0223 13:14:58.204093 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/1.log" Feb 23 13:14:58.206347 master-0 kubenswrapper[17411]: I0223 13:14:58.206308 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:14:58.206424 master-0 kubenswrapper[17411]: I0223 13:14:58.206363 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="fc753d9e3c2d3c886c066424fb6affbf7cdff2f3e33327f6c7227c1a88592ae3" exitCode=255 Feb 23 13:14:58.206527 master-0 kubenswrapper[17411]: I0223 13:14:58.206453 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerDied","Data":"fc753d9e3c2d3c886c066424fb6affbf7cdff2f3e33327f6c7227c1a88592ae3"} Feb 23 13:14:58.206600 master-0 kubenswrapper[17411]: I0223 13:14:58.206577 17411 scope.go:117] "RemoveContainer" containerID="97003bb78df37a7cac3fa631bd2a6f35a53659f4aa32d9c08a4a9ec06a82442a" Feb 23 13:14:58.209369 master-0 kubenswrapper[17411]: I0223 13:14:58.209332 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/3.log" Feb 23 13:14:59.221030 master-0 kubenswrapper[17411]: I0223 13:14:59.220969 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/2.log" Feb 23 13:14:59.222915 master-0 kubenswrapper[17411]: I0223 13:14:59.222873 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:14:59.223009 master-0 kubenswrapper[17411]: I0223 13:14:59.222958 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc"} Feb 23 13:15:04.568447 master-0 kubenswrapper[17411]: E0223 13:15:04.568363 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:15:05.937317 master-0 kubenswrapper[17411]: I0223 13:15:05.937163 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:15:05.937317 master-0 kubenswrapper[17411]: I0223 13:15:05.937311 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:15:07.554695 master-0 kubenswrapper[17411]: I0223 13:15:07.554564 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:07.554695 master-0 kubenswrapper[17411]: I0223 13:15:07.554670 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:07.555906 master-0 kubenswrapper[17411]: I0223 13:15:07.554679 17411 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:07.555906 master-0 kubenswrapper[17411]: I0223 13:15:07.554784 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:07.555906 master-0 kubenswrapper[17411]: I0223 13:15:07.555020 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:15:07.556158 master-0 kubenswrapper[17411]: I0223 13:15:07.556010 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"d5d96f1ccc99f0c2ba6bde8bbc99703aa13f3dff0a7f5689bb7825e07f78bde4"} pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" containerMessage="Container console-operator failed liveness probe, will be restarted" Feb 23 13:15:07.556158 master-0 kubenswrapper[17411]: I0223 13:15:07.556061 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" containerID="cri-o://d5d96f1ccc99f0c2ba6bde8bbc99703aa13f3dff0a7f5689bb7825e07f78bde4" gracePeriod=30 Feb 23 13:15:07.575387 master-0 kubenswrapper[17411]: I0223 13:15:07.575284 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz 
container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": read tcp 10.128.0.2:44434->10.128.0.77:8443: read: connection reset by peer" start-of-body= Feb 23 13:15:07.575601 master-0 kubenswrapper[17411]: I0223 13:15:07.575385 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": read tcp 10.128.0.2:44434->10.128.0.77:8443: read: connection reset by peer" Feb 23 13:15:08.312069 master-0 kubenswrapper[17411]: I0223 13:15:08.311982 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/0.log" Feb 23 13:15:08.312069 master-0 kubenswrapper[17411]: I0223 13:15:08.312059 17411 generic.go:334] "Generic (PLEG): container finished" podID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerID="d5d96f1ccc99f0c2ba6bde8bbc99703aa13f3dff0a7f5689bb7825e07f78bde4" exitCode=255 Feb 23 13:15:08.312069 master-0 kubenswrapper[17411]: I0223 13:15:08.312102 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerDied","Data":"d5d96f1ccc99f0c2ba6bde8bbc99703aa13f3dff0a7f5689bb7825e07f78bde4"} Feb 23 13:15:08.312442 master-0 kubenswrapper[17411]: I0223 13:15:08.312133 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerStarted","Data":"210c8e907b7f1420aff40ed4701535339338f1ccae52bfb676a956d7c1157621"} Feb 23 13:15:08.312442 master-0 kubenswrapper[17411]: I0223 13:15:08.312370 17411 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:15:08.937619 master-0 kubenswrapper[17411]: I0223 13:15:08.937505 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:08.937619 master-0 kubenswrapper[17411]: I0223 13:15:08.937602 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:09.312861 master-0 kubenswrapper[17411]: I0223 13:15:09.312789 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:09.313102 master-0 kubenswrapper[17411]: I0223 13:15:09.312862 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:10.321162 master-0 kubenswrapper[17411]: I0223 13:15:10.321051 17411 patch_prober.go:28] interesting 
pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:10.322142 master-0 kubenswrapper[17411]: I0223 13:15:10.321184 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:10.869022 master-0 kubenswrapper[17411]: I0223 13:15:10.868930 17411 scope.go:117] "RemoveContainer" containerID="09a2a812dfc074881e48f1809e4ebec8c0991b3f0115d4c4a42f2f9c39b6c609" Feb 23 13:15:11.352068 master-0 kubenswrapper[17411]: I0223 13:15:11.351985 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/3.log" Feb 23 13:15:11.353390 master-0 kubenswrapper[17411]: I0223 13:15:11.352710 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerStarted","Data":"0813bfb6e953cd7dccc120a35be8130ef691d39b2802203da3ff37c1fe23401a"} Feb 23 13:15:12.869615 master-0 kubenswrapper[17411]: I0223 13:15:12.869541 17411 scope.go:117] "RemoveContainer" containerID="89c68aa1c52809c1469e6ffbd2eee04b300625fa0bdc28cc370e25fa90995cb5" Feb 23 13:15:13.369542 master-0 kubenswrapper[17411]: I0223 13:15:13.369488 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/4.log" Feb 23 13:15:13.369542 master-0 kubenswrapper[17411]: I0223 13:15:13.369543 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"7542932db8ce52dd0433bcdb6da61f01bd8b820ad9cbce4b661a7f58c10cfefe"} Feb 23 13:15:17.554927 master-0 kubenswrapper[17411]: I0223 13:15:17.554812 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:17.556292 master-0 kubenswrapper[17411]: I0223 13:15:17.554932 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:17.556292 master-0 kubenswrapper[17411]: I0223 13:15:17.554951 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:17.556292 master-0 kubenswrapper[17411]: I0223 13:15:17.555059 17411 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:18.937813 master-0 kubenswrapper[17411]: I0223 13:15:18.937654 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:18.938800 master-0 kubenswrapper[17411]: I0223 13:15:18.937819 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:21.570621 master-0 kubenswrapper[17411]: E0223 13:15:21.570457 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 23 13:15:27.555969 master-0 kubenswrapper[17411]: I0223 13:15:27.555706 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 23 13:15:27.555969 master-0 kubenswrapper[17411]: I0223 13:15:27.555909 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:27.557374 master-0 kubenswrapper[17411]: I0223 13:15:27.555756 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:27.557374 master-0 kubenswrapper[17411]: I0223 13:15:27.556432 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:28.641217 master-0 kubenswrapper[17411]: I0223 13:15:28.641121 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:47966->127.0.0.1:10357: read: connection reset by peer" start-of-body= Feb 23 13:15:28.642523 master-0 kubenswrapper[17411]: I0223 13:15:28.641235 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:47966->127.0.0.1:10357: read: connection reset by peer" Feb 23 13:15:28.642523 master-0 kubenswrapper[17411]: I0223 13:15:28.641370 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:15:28.642912 master-0 kubenswrapper[17411]: I0223 13:15:28.642836 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 23 13:15:28.643121 master-0 kubenswrapper[17411]: I0223 13:15:28.643057 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" containerID="cri-o://e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc" gracePeriod=30 Feb 23 13:15:29.166408 master-0 kubenswrapper[17411]: E0223 13:15:29.166339 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(38b7ce474df02ea287eb02ea513a627a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" Feb 23 13:15:29.514745 master-0 kubenswrapper[17411]: I0223 13:15:29.514689 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/3.log" Feb 23 13:15:29.516187 master-0 kubenswrapper[17411]: I0223 13:15:29.516135 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/2.log" Feb 23 13:15:29.519765 master-0 kubenswrapper[17411]: I0223 13:15:29.519702 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:15:29.519931 master-0 kubenswrapper[17411]: I0223 13:15:29.519778 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc" exitCode=255 Feb 23 13:15:29.519931 master-0 kubenswrapper[17411]: I0223 13:15:29.519829 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerDied","Data":"e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc"} Feb 23 13:15:29.519931 master-0 kubenswrapper[17411]: I0223 13:15:29.519887 17411 scope.go:117] "RemoveContainer" containerID="fc753d9e3c2d3c886c066424fb6affbf7cdff2f3e33327f6c7227c1a88592ae3" Feb 23 13:15:29.520789 master-0 kubenswrapper[17411]: I0223 13:15:29.520701 17411 scope.go:117] "RemoveContainer" containerID="e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc" Feb 23 13:15:29.521179 master-0 kubenswrapper[17411]: E0223 13:15:29.521137 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(38b7ce474df02ea287eb02ea513a627a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" Feb 23 13:15:30.531781 master-0 kubenswrapper[17411]: I0223 13:15:30.531707 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/3.log" Feb 23 13:15:30.534030 master-0 kubenswrapper[17411]: I0223 13:15:30.533963 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:15:35.937066 master-0 kubenswrapper[17411]: I0223 13:15:35.936943 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:15:35.939637 master-0 kubenswrapper[17411]: I0223 13:15:35.938604 17411 scope.go:117] "RemoveContainer" containerID="e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc" Feb 23 13:15:35.939637 master-0 kubenswrapper[17411]: E0223 13:15:35.939191 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(38b7ce474df02ea287eb02ea513a627a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" Feb 23 13:15:37.554821 master-0 kubenswrapper[17411]: I0223 13:15:37.554640 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get 
\"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:37.554821 master-0 kubenswrapper[17411]: I0223 13:15:37.554803 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:37.555939 master-0 kubenswrapper[17411]: I0223 13:15:37.554889 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:15:37.555939 master-0 kubenswrapper[17411]: I0223 13:15:37.554704 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:37.555939 master-0 kubenswrapper[17411]: I0223 13:15:37.555037 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:37.556710 master-0 kubenswrapper[17411]: I0223 13:15:37.556618 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"210c8e907b7f1420aff40ed4701535339338f1ccae52bfb676a956d7c1157621"} 
pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" containerMessage="Container console-operator failed liveness probe, will be restarted" Feb 23 13:15:37.556824 master-0 kubenswrapper[17411]: I0223 13:15:37.556737 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" containerID="cri-o://210c8e907b7f1420aff40ed4701535339338f1ccae52bfb676a956d7c1157621" gracePeriod=30 Feb 23 13:15:37.573778 master-0 kubenswrapper[17411]: I0223 13:15:37.573711 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": read tcp 10.128.0.2:60188->10.128.0.77:8443: read: connection reset by peer" start-of-body= Feb 23 13:15:37.573943 master-0 kubenswrapper[17411]: I0223 13:15:37.573785 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": read tcp 10.128.0.2:60188->10.128.0.77:8443: read: connection reset by peer" Feb 23 13:15:38.572684 master-0 kubenswrapper[17411]: E0223 13:15:38.572514 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Feb 23 13:15:38.608398 master-0 kubenswrapper[17411]: I0223 13:15:38.607066 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/1.log" Feb 23 13:15:38.609032 master-0 
kubenswrapper[17411]: I0223 13:15:38.608970 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/0.log" Feb 23 13:15:38.609207 master-0 kubenswrapper[17411]: I0223 13:15:38.609088 17411 generic.go:334] "Generic (PLEG): container finished" podID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerID="210c8e907b7f1420aff40ed4701535339338f1ccae52bfb676a956d7c1157621" exitCode=255 Feb 23 13:15:38.609207 master-0 kubenswrapper[17411]: I0223 13:15:38.609162 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerDied","Data":"210c8e907b7f1420aff40ed4701535339338f1ccae52bfb676a956d7c1157621"} Feb 23 13:15:38.609438 master-0 kubenswrapper[17411]: I0223 13:15:38.609227 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerStarted","Data":"a9bd4a7b9fb99886adf93bfc960885defd2d234f1a5421f4f3bc1b667090a9fc"} Feb 23 13:15:38.609438 master-0 kubenswrapper[17411]: I0223 13:15:38.609295 17411 scope.go:117] "RemoveContainer" containerID="d5d96f1ccc99f0c2ba6bde8bbc99703aa13f3dff0a7f5689bb7825e07f78bde4" Feb 23 13:15:38.610205 master-0 kubenswrapper[17411]: I0223 13:15:38.610063 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:15:39.609932 master-0 kubenswrapper[17411]: I0223 13:15:39.609814 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Feb 23 13:15:39.609932 master-0 kubenswrapper[17411]: I0223 13:15:39.609916 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:39.623223 master-0 kubenswrapper[17411]: I0223 13:15:39.623168 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/1.log" Feb 23 13:15:40.623994 master-0 kubenswrapper[17411]: I0223 13:15:40.623867 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:15:40.624945 master-0 kubenswrapper[17411]: I0223 13:15:40.623986 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:15:43.656312 master-0 kubenswrapper[17411]: I0223 13:15:43.656056 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/5.log" Feb 23 13:15:43.657287 master-0 kubenswrapper[17411]: I0223 13:15:43.656803 17411 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/4.log"
Feb 23 13:15:43.657287 master-0 kubenswrapper[17411]: I0223 13:15:43.656880 17411 generic.go:334] "Generic (PLEG): container finished" podID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" containerID="7542932db8ce52dd0433bcdb6da61f01bd8b820ad9cbce4b661a7f58c10cfefe" exitCode=1
Feb 23 13:15:43.657287 master-0 kubenswrapper[17411]: I0223 13:15:43.656946 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerDied","Data":"7542932db8ce52dd0433bcdb6da61f01bd8b820ad9cbce4b661a7f58c10cfefe"}
Feb 23 13:15:43.657287 master-0 kubenswrapper[17411]: I0223 13:15:43.657023 17411 scope.go:117] "RemoveContainer" containerID="89c68aa1c52809c1469e6ffbd2eee04b300625fa0bdc28cc370e25fa90995cb5"
Feb 23 13:15:43.658369 master-0 kubenswrapper[17411]: I0223 13:15:43.658004 17411 scope.go:117] "RemoveContainer" containerID="7542932db8ce52dd0433bcdb6da61f01bd8b820ad9cbce4b661a7f58c10cfefe"
Feb 23 13:15:43.660312 master-0 kubenswrapper[17411]: E0223 13:15:43.660098 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904"
Feb 23 13:15:44.672915 master-0 kubenswrapper[17411]: I0223 13:15:44.672803 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/5.log"
Feb 23 13:15:47.555159 master-0 kubenswrapper[17411]: I0223 13:15:47.555014 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:15:47.556386 master-0 kubenswrapper[17411]: I0223 13:15:47.555153 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:15:47.556386 master-0 kubenswrapper[17411]: I0223 13:15:47.555185 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:15:47.556386 master-0 kubenswrapper[17411]: I0223 13:15:47.555303 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:15:50.869457 master-0 kubenswrapper[17411]: I0223 13:15:50.869340 17411 scope.go:117] "RemoveContainer" containerID="e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc"
Feb 23 13:15:50.870303 master-0 kubenswrapper[17411]: E0223 13:15:50.869779 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(38b7ce474df02ea287eb02ea513a627a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a"
Feb 23 13:15:54.868725 master-0 kubenswrapper[17411]: I0223 13:15:54.868644 17411 scope.go:117] "RemoveContainer" containerID="7542932db8ce52dd0433bcdb6da61f01bd8b820ad9cbce4b661a7f58c10cfefe"
Feb 23 13:15:54.869437 master-0 kubenswrapper[17411]: E0223 13:15:54.868875 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904"
Feb 23 13:15:55.574510 master-0 kubenswrapper[17411]: E0223 13:15:55.574399 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 23 13:15:57.554288 master-0 kubenswrapper[17411]: I0223 13:15:57.554112 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:15:57.554288 master-0 kubenswrapper[17411]: I0223 13:15:57.554180 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:15:57.555211 master-0 kubenswrapper[17411]: I0223 13:15:57.554333 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:15:57.555211 master-0 kubenswrapper[17411]: I0223 13:15:57.554310 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:00.824575 master-0 kubenswrapper[17411]: I0223 13:16:00.824417 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-6llwl_a3dfb271-a659-45e0-b51d-5e99ec43b555/cluster-node-tuning-operator/0.log"
Feb 23 13:16:00.824575 master-0 kubenswrapper[17411]: I0223 13:16:00.824492 17411 generic.go:334] "Generic (PLEG): container finished" podID="a3dfb271-a659-45e0-b51d-5e99ec43b555" containerID="351e4db24f64009fc4f824529f2660bb02ed2356f12336ec3301a4d672483590" exitCode=1
Feb 23 13:16:00.825377 master-0 kubenswrapper[17411]: I0223 13:16:00.824555 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" event={"ID":"a3dfb271-a659-45e0-b51d-5e99ec43b555","Type":"ContainerDied","Data":"351e4db24f64009fc4f824529f2660bb02ed2356f12336ec3301a4d672483590"}
Feb 23 13:16:00.825626 master-0 kubenswrapper[17411]: I0223 13:16:00.825591 17411 scope.go:117] "RemoveContainer" containerID="351e4db24f64009fc4f824529f2660bb02ed2356f12336ec3301a4d672483590"
Feb 23 13:16:00.826846 master-0 kubenswrapper[17411]: I0223 13:16:00.826618 17411 generic.go:334] "Generic (PLEG): container finished" podID="8a406f63-eeeb-4da3-a1d0-86b5ab5d802c" containerID="49cba424cf2c60e283525bde6160dccd693982c2542843d4d0587d31883af795" exitCode=0
Feb 23 13:16:00.826846 master-0 kubenswrapper[17411]: I0223 13:16:00.826685 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" event={"ID":"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c","Type":"ContainerDied","Data":"49cba424cf2c60e283525bde6160dccd693982c2542843d4d0587d31883af795"}
Feb 23 13:16:00.827534 master-0 kubenswrapper[17411]: I0223 13:16:00.827488 17411 scope.go:117] "RemoveContainer" containerID="49cba424cf2c60e283525bde6160dccd693982c2542843d4d0587d31883af795"
Feb 23 13:16:01.840386 master-0 kubenswrapper[17411]: I0223 13:16:01.840208 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-7rb6v" event={"ID":"8a406f63-eeeb-4da3-a1d0-86b5ab5d802c","Type":"ContainerStarted","Data":"233c704e5ce6513dd169cebca212139006a3b06759151c8c7dadbd5e4bba6c85"}
Feb 23 13:16:01.843159 master-0 kubenswrapper[17411]: I0223 13:16:01.843113 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tzms_da5d5997-e45f-4858-a9a9-e880bc222caf/package-server-manager/0.log"
Feb 23 13:16:01.843716 master-0 kubenswrapper[17411]: I0223 13:16:01.843671 17411 generic.go:334] "Generic (PLEG): container finished" podID="da5d5997-e45f-4858-a9a9-e880bc222caf" containerID="683cdc0fee6b544a3be498a634e1336632426f938865b51d36e3f4e04230192a" exitCode=1
Feb 23 13:16:01.843838 master-0 kubenswrapper[17411]: I0223 13:16:01.843763 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" event={"ID":"da5d5997-e45f-4858-a9a9-e880bc222caf","Type":"ContainerDied","Data":"683cdc0fee6b544a3be498a634e1336632426f938865b51d36e3f4e04230192a"}
Feb 23 13:16:01.844959 master-0 kubenswrapper[17411]: I0223 13:16:01.844895 17411 scope.go:117] "RemoveContainer" containerID="683cdc0fee6b544a3be498a634e1336632426f938865b51d36e3f4e04230192a"
Feb 23 13:16:01.848326 master-0 kubenswrapper[17411]: I0223 13:16:01.848225 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-6llwl_a3dfb271-a659-45e0-b51d-5e99ec43b555/cluster-node-tuning-operator/0.log"
Feb 23 13:16:01.848439 master-0 kubenswrapper[17411]: I0223 13:16:01.848325 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" event={"ID":"a3dfb271-a659-45e0-b51d-5e99ec43b555","Type":"ContainerStarted","Data":"edc1773c982d6063298896af34c17dae7d495b67e0652db28d6d5baf5d894ae5"}
Feb 23 13:16:02.599909 master-0 kubenswrapper[17411]: I0223 13:16:02.599790 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:16:02.797685 master-0 kubenswrapper[17411]: I0223 13:16:02.797600 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:16:02.856546 master-0 kubenswrapper[17411]: I0223 13:16:02.856426 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-8tzms_da5d5997-e45f-4858-a9a9-e880bc222caf/package-server-manager/0.log"
Feb 23 13:16:02.857018 master-0 kubenswrapper[17411]: I0223 13:16:02.856905 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" event={"ID":"da5d5997-e45f-4858-a9a9-e880bc222caf","Type":"ContainerStarted","Data":"f53610f72a49452d995c5bc8208c435eeda99546ee2412060dabc7189e718cb6"}
Feb 23 13:16:02.857300 master-0 kubenswrapper[17411]: I0223 13:16:02.857222 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:16:02.869785 master-0 kubenswrapper[17411]: I0223 13:16:02.869719 17411 scope.go:117] "RemoveContainer" containerID="e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc"
Feb 23 13:16:02.870081 master-0 kubenswrapper[17411]: E0223 13:16:02.870042 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(38b7ce474df02ea287eb02ea513a627a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a"
Feb 23 13:16:06.823545 master-0 kubenswrapper[17411]: I0223 13:16:06.823474 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:16:06.824477 master-0 kubenswrapper[17411]: I0223 13:16:06.824438 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:16:06.824797 master-0 kubenswrapper[17411]: I0223 13:16:06.824755 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver" containerID="cri-o://1451bfe95dea492070e81afea279bb401c056a53aa2057f0e288509531e88c91" gracePeriod=15
Feb 23 13:16:06.824881 master-0 kubenswrapper[17411]: I0223 13:16:06.824813 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-check-endpoints" containerID="cri-o://061d7a30e7243aaf925347846dddb4f9e340978170f0d9805e39811eeb5a64eb" gracePeriod=15
Feb 23 13:16:06.824978 master-0 kubenswrapper[17411]: I0223 13:16:06.824890 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://219fe31af98ac0a70bf5c99e980eff392eafdb712a96f15192f2e77ddadeb718" gracePeriod=15
Feb 23 13:16:06.825136 master-0 kubenswrapper[17411]: I0223 13:16:06.824978 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-syncer" containerID="cri-o://dc5ce8696fe6f5fe40f802dd027c3d1021d387667d3f9353461a3632d607781a" gracePeriod=15
Feb 23 13:16:06.825225 master-0 kubenswrapper[17411]: I0223 13:16:06.825070 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://af37724971496c567478e8ee1bc3c4cea631a17cbc43ca93ff3d0e2473a64b7f" gracePeriod=15
Feb 23 13:16:06.826157 master-0 kubenswrapper[17411]: I0223 13:16:06.826125 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.826739 master-0 kubenswrapper[17411]: I0223 13:16:06.826700 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 23 13:16:06.826998 master-0 kubenswrapper[17411]: E0223 13:16:06.826966 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="setup"
Feb 23 13:16:06.826998 master-0 kubenswrapper[17411]: I0223 13:16:06.826988 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="setup"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: E0223 13:16:06.827007 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-check-endpoints"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: I0223 13:16:06.827016 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-check-endpoints"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: E0223 13:16:06.827027 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: I0223 13:16:06.827035 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: E0223 13:16:06.827059 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: I0223 13:16:06.827067 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: E0223 13:16:06.827076 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-syncer"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: I0223 13:16:06.827082 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-syncer"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: E0223 13:16:06.827100 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:16:06.827165 master-0 kubenswrapper[17411]: I0223 13:16:06.827107 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:16:06.828076 master-0 kubenswrapper[17411]: I0223 13:16:06.827331 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 13:16:06.828076 master-0 kubenswrapper[17411]: I0223 13:16:06.827355 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-syncer"
Feb 23 13:16:06.828076 master-0 kubenswrapper[17411]: I0223 13:16:06.827365 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-check-endpoints"
Feb 23 13:16:06.828076 master-0 kubenswrapper[17411]: I0223 13:16:06.827381 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-insecure-readyz"
Feb 23 13:16:06.828076 master-0 kubenswrapper[17411]: I0223 13:16:06.827396 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver"
Feb 23 13:16:06.842089 master-0 kubenswrapper[17411]: I0223 13:16:06.842035 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="888e23114cf20f3bf6573c5f7b88d7d0" podUID="959c75833224b4ba3fa488b77d8f5032"
Feb 23 13:16:06.892951 master-0 kubenswrapper[17411]: I0223 13:16:06.892867 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.892951 master-0 kubenswrapper[17411]: I0223 13:16:06.892953 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.893202 master-0 kubenswrapper[17411]: I0223 13:16:06.892998 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.893202 master-0 kubenswrapper[17411]: I0223 13:16:06.893026 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.893202 master-0 kubenswrapper[17411]: I0223 13:16:06.893056 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.893202 master-0 kubenswrapper[17411]: I0223 13:16:06.893081 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.893202 master-0 kubenswrapper[17411]: I0223 13:16:06.893159 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.893202 master-0 kubenswrapper[17411]: I0223 13:16:06.893205 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995235 master-0 kubenswrapper[17411]: I0223 13:16:06.995087 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995235 master-0 kubenswrapper[17411]: I0223 13:16:06.995176 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995484 master-0 kubenswrapper[17411]: I0223 13:16:06.995353 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995535 master-0 kubenswrapper[17411]: I0223 13:16:06.995483 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995604 master-0 kubenswrapper[17411]: I0223 13:16:06.995579 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.995657 master-0 kubenswrapper[17411]: I0223 13:16:06.995622 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995708 master-0 kubenswrapper[17411]: I0223 13:16:06.995669 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.995708 master-0 kubenswrapper[17411]: I0223 13:16:06.995694 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.995796 master-0 kubenswrapper[17411]: I0223 13:16:06.995721 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995796 master-0 kubenswrapper[17411]: I0223 13:16:06.995761 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995796 master-0 kubenswrapper[17411]: I0223 13:16:06.995775 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.995940 master-0 kubenswrapper[17411]: I0223 13:16:06.995822 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.995940 master-0 kubenswrapper[17411]: I0223 13:16:06.995841 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.995940 master-0 kubenswrapper[17411]: I0223 13:16:06.995868 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/959c75833224b4ba3fa488b77d8f5032-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"959c75833224b4ba3fa488b77d8f5032\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:06.995940 master-0 kubenswrapper[17411]: I0223 13:16:06.995907 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:06.996112 master-0 kubenswrapper[17411]: I0223 13:16:06.995981 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:07.554838 master-0 kubenswrapper[17411]: I0223 13:16:07.554720 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:07.555124 master-0 kubenswrapper[17411]: I0223 13:16:07.554873 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:07.555124 master-0 kubenswrapper[17411]: I0223 13:16:07.554765 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:07.555124 master-0 kubenswrapper[17411]: I0223 13:16:07.555028 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:07.555124 master-0 kubenswrapper[17411]: I0223 13:16:07.555109 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:16:07.555933 master-0 kubenswrapper[17411]: I0223 13:16:07.555892 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"a9bd4a7b9fb99886adf93bfc960885defd2d234f1a5421f4f3bc1b667090a9fc"} pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" containerMessage="Container console-operator failed liveness probe, will be restarted"
Feb 23 13:16:07.556004 master-0 kubenswrapper[17411]: I0223 13:16:07.555942 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" containerID="cri-o://a9bd4a7b9fb99886adf93bfc960885defd2d234f1a5421f4f3bc1b667090a9fc" gracePeriod=30
Feb 23 13:16:07.574777 master-0 kubenswrapper[17411]: I0223 13:16:07.574723 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": read tcp 10.128.0.2:59622->10.128.0.77:8443: read: connection reset by peer" start-of-body=
Feb 23 13:16:07.574943 master-0 kubenswrapper[17411]: I0223 13:16:07.574787 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": read tcp 10.128.0.2:59622->10.128.0.77:8443: read: connection reset by peer"
Feb 23 13:16:07.733212 master-0 kubenswrapper[17411]: I0223 13:16:07.732376 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 23 13:16:07.751400 master-0 kubenswrapper[17411]: I0223 13:16:07.746270 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 23 13:16:07.821531 master-0 kubenswrapper[17411]: W0223 13:16:07.821371 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafeec80f2ec1ff5cb32c2367912befef.slice/crio-3d8744707275dcfbbbdb65b4d93cecd147c0c0062a9470db33589587fca77c01 WatchSource:0}: Error finding container 3d8744707275dcfbbbdb65b4d93cecd147c0c0062a9470db33589587fca77c01: Status 404 returned error can't find the container with id 3d8744707275dcfbbbdb65b4d93cecd147c0c0062a9470db33589587fca77c01
Feb 23 13:16:07.898058 master-0 kubenswrapper[17411]: I0223 13:16:07.898002 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver-cert-syncer/0.log"
Feb 23 13:16:07.898592 master-0 kubenswrapper[17411]: I0223 13:16:07.898553 17411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="061d7a30e7243aaf925347846dddb4f9e340978170f0d9805e39811eeb5a64eb" exitCode=0
Feb 23 13:16:07.898592 master-0 kubenswrapper[17411]: I0223 13:16:07.898575 17411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="af37724971496c567478e8ee1bc3c4cea631a17cbc43ca93ff3d0e2473a64b7f" exitCode=0
Feb 23 13:16:07.898592 master-0 kubenswrapper[17411]: I0223 13:16:07.898585 17411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="dc5ce8696fe6f5fe40f802dd027c3d1021d387667d3f9353461a3632d607781a" exitCode=2
Feb 23 13:16:07.900069 master-0 kubenswrapper[17411]: I0223 13:16:07.900041 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"afeec80f2ec1ff5cb32c2367912befef","Type":"ContainerStarted","Data":"3d8744707275dcfbbbdb65b4d93cecd147c0c0062a9470db33589587fca77c01"}
Feb 23 13:16:07.901939 master-0 kubenswrapper[17411]: I0223 13:16:07.901901 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/2.log"
Feb 23 13:16:07.902833 master-0 kubenswrapper[17411]: I0223 13:16:07.902779 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/1.log"
Feb 23 13:16:07.902908 master-0 kubenswrapper[17411]: I0223 13:16:07.902834 17411 generic.go:334] "Generic (PLEG): container finished" podID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerID="a9bd4a7b9fb99886adf93bfc960885defd2d234f1a5421f4f3bc1b667090a9fc" exitCode=255
Feb 23 13:16:07.902908 master-0 kubenswrapper[17411]: I0223 13:16:07.902891 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerDied","Data":"a9bd4a7b9fb99886adf93bfc960885defd2d234f1a5421f4f3bc1b667090a9fc"}
Feb 23 13:16:07.903007 master-0 kubenswrapper[17411]: I0223 13:16:07.902923 17411 scope.go:117] "RemoveContainer" containerID="210c8e907b7f1420aff40ed4701535339338f1ccae52bfb676a956d7c1157621"
Feb 23 13:16:07.905084 master-0 kubenswrapper[17411]: I0223 13:16:07.905045 17411 generic.go:334] "Generic (PLEG): container finished" podID="23f6e482-8da1-4df0-8de6-66a930e45a20" containerID="7e430dd00f0a0105863d8293fdc97c4fe96bc4ed6b8ff010a52f450aad23346b" exitCode=0
Feb 23 13:16:07.905165 master-0 kubenswrapper[17411]: I0223 13:16:07.905105 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" event={"ID":"23f6e482-8da1-4df0-8de6-66a930e45a20","Type":"ContainerDied","Data":"7e430dd00f0a0105863d8293fdc97c4fe96bc4ed6b8ff010a52f450aad23346b"}
Feb 23 13:16:08.869110 master-0 kubenswrapper[17411]: I0223 13:16:08.868975 17411 scope.go:117] "RemoveContainer" containerID="7542932db8ce52dd0433bcdb6da61f01bd8b820ad9cbce4b661a7f58c10cfefe"
Feb 23 13:16:08.869338 master-0 kubenswrapper[17411]: E0223 13:16:08.869284 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904"
Feb 23 13:16:08.918836 master-0 kubenswrapper[17411]: I0223 13:16:08.918734 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"afeec80f2ec1ff5cb32c2367912befef","Type":"ContainerStarted","Data":"0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb"}
Feb 23 13:16:08.920587 master-0 kubenswrapper[17411]: I0223 13:16:08.920523 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/2.log"
Feb 23 13:16:08.920747 master-0 kubenswrapper[17411]: I0223 13:16:08.920708 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerStarted","Data":"7cad404ca76efda43343352d885646b7d9999a244c40ac96a495b9212da0c05b"}
Feb 23 13:16:08.921016 master-0 kubenswrapper[17411]: I0223 13:16:08.920982 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:16:08.924605 master-0 kubenswrapper[17411]: I0223 13:16:08.924574 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver-cert-syncer/0.log"
Feb 23 13:16:08.925427 master-0 kubenswrapper[17411]: I0223 13:16:08.925381 17411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="219fe31af98ac0a70bf5c99e980eff392eafdb712a96f15192f2e77ddadeb718" exitCode=0
Feb 23 13:16:08.931293 master-0 kubenswrapper[17411]: I0223 13:16:08.931260 17411 generic.go:334] "Generic (PLEG): container finished" podID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" containerID="7ae02e0df64340d5796187bee35b0a226bdb253a9ea0b0f2d5eec150f3a915b5" exitCode=0
Feb 23 13:16:08.931390 master-0 kubenswrapper[17411]: I0223 13:16:08.931345 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" event={"ID":"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4","Type":"ContainerDied","Data":"7ae02e0df64340d5796187bee35b0a226bdb253a9ea0b0f2d5eec150f3a915b5"}
Feb 23 13:16:08.931446 master-0 kubenswrapper[17411]: I0223 13:16:08.931398 17411 scope.go:117] "RemoveContainer"
containerID="f95ba38760f7dc259e69f00ebd4eabf8bd09b35de53d8f84cbae1abd114eb259" Feb 23 13:16:08.932145 master-0 kubenswrapper[17411]: I0223 13:16:08.932037 17411 scope.go:117] "RemoveContainer" containerID="7ae02e0df64340d5796187bee35b0a226bdb253a9ea0b0f2d5eec150f3a915b5" Feb 23 13:16:09.280603 master-0 kubenswrapper[17411]: I0223 13:16:09.280535 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:16:09.341050 master-0 kubenswrapper[17411]: I0223 13:16:09.340956 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-kubelet-dir\") pod \"23f6e482-8da1-4df0-8de6-66a930e45a20\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " Feb 23 13:16:09.341371 master-0 kubenswrapper[17411]: I0223 13:16:09.341074 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-var-lock\") pod \"23f6e482-8da1-4df0-8de6-66a930e45a20\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " Feb 23 13:16:09.341371 master-0 kubenswrapper[17411]: I0223 13:16:09.341117 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23f6e482-8da1-4df0-8de6-66a930e45a20-kube-api-access\") pod \"23f6e482-8da1-4df0-8de6-66a930e45a20\" (UID: \"23f6e482-8da1-4df0-8de6-66a930e45a20\") " Feb 23 13:16:09.341371 master-0 kubenswrapper[17411]: I0223 13:16:09.341115 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23f6e482-8da1-4df0-8de6-66a930e45a20" (UID: "23f6e482-8da1-4df0-8de6-66a930e45a20"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:16:09.341371 master-0 kubenswrapper[17411]: I0223 13:16:09.341142 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-var-lock" (OuterVolumeSpecName: "var-lock") pod "23f6e482-8da1-4df0-8de6-66a930e45a20" (UID: "23f6e482-8da1-4df0-8de6-66a930e45a20"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:16:09.341740 master-0 kubenswrapper[17411]: I0223 13:16:09.341685 17411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:09.341740 master-0 kubenswrapper[17411]: I0223 13:16:09.341712 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23f6e482-8da1-4df0-8de6-66a930e45a20-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:09.346525 master-0 kubenswrapper[17411]: I0223 13:16:09.346466 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f6e482-8da1-4df0-8de6-66a930e45a20-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23f6e482-8da1-4df0-8de6-66a930e45a20" (UID: "23f6e482-8da1-4df0-8de6-66a930e45a20"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:16:09.443407 master-0 kubenswrapper[17411]: I0223 13:16:09.443337 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23f6e482-8da1-4df0-8de6-66a930e45a20-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:16:09.921986 master-0 kubenswrapper[17411]: I0223 13:16:09.921781 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:16:09.921986 master-0 kubenswrapper[17411]: I0223 13:16:09.921915 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:09.940328 master-0 kubenswrapper[17411]: I0223 13:16:09.940277 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Feb 23 13:16:09.940648 master-0 kubenswrapper[17411]: I0223 13:16:09.940554 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" event={"ID":"23f6e482-8da1-4df0-8de6-66a930e45a20","Type":"ContainerDied","Data":"b4cbd448858d62088101bec41ca7077f45d000b302c703d290e5d6c85d16df57"} Feb 23 13:16:09.940648 master-0 kubenswrapper[17411]: I0223 13:16:09.940630 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4cbd448858d62088101bec41ca7077f45d000b302c703d290e5d6c85d16df57" Feb 23 13:16:09.943443 master-0 kubenswrapper[17411]: I0223 13:16:09.943386 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" event={"ID":"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4","Type":"ContainerStarted","Data":"c46456f1ed6992fcaa7efa9da58c257125d42b7b803815f762f0ce0032f75935"} Feb 23 13:16:10.944228 master-0 kubenswrapper[17411]: I0223 13:16:10.944129 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:16:10.944228 master-0 kubenswrapper[17411]: I0223 13:16:10.944204 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:12.576508 master-0 kubenswrapper[17411]: E0223 13:16:12.576453 17411 controller.go:145] "Failed to ensure 
lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Feb 23 13:16:13.974698 master-0 kubenswrapper[17411]: I0223 13:16:13.974628 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/4.log" Feb 23 13:16:13.975574 master-0 kubenswrapper[17411]: I0223 13:16:13.975522 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/3.log" Feb 23 13:16:13.976072 master-0 kubenswrapper[17411]: I0223 13:16:13.976031 17411 generic.go:334] "Generic (PLEG): container finished" podID="16898873-740b-4b85-99cf-d25a28d4ab00" containerID="0813bfb6e953cd7dccc120a35be8130ef691d39b2802203da3ff37c1fe23401a" exitCode=1 Feb 23 13:16:13.976128 master-0 kubenswrapper[17411]: I0223 13:16:13.976075 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerDied","Data":"0813bfb6e953cd7dccc120a35be8130ef691d39b2802203da3ff37c1fe23401a"} Feb 23 13:16:13.976128 master-0 kubenswrapper[17411]: I0223 13:16:13.976112 17411 scope.go:117] "RemoveContainer" containerID="09a2a812dfc074881e48f1809e4ebec8c0991b3f0115d4c4a42f2f9c39b6c609" Feb 23 13:16:13.977524 master-0 kubenswrapper[17411]: I0223 13:16:13.977454 17411 scope.go:117] "RemoveContainer" containerID="0813bfb6e953cd7dccc120a35be8130ef691d39b2802203da3ff37c1fe23401a" Feb 23 13:16:13.978440 master-0 kubenswrapper[17411]: E0223 13:16:13.978336 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-8mxs2_openshift-machine-api(16898873-740b-4b85-99cf-d25a28d4ab00)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" Feb 23 13:16:14.992289 master-0 kubenswrapper[17411]: I0223 13:16:14.992190 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/4.log" Feb 23 13:16:15.868470 master-0 kubenswrapper[17411]: I0223 13:16:15.868386 17411 scope.go:117] "RemoveContainer" containerID="e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc" Feb 23 13:16:17.014168 master-0 kubenswrapper[17411]: I0223 13:16:17.014107 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/3.log" Feb 23 13:16:17.016309 master-0 kubenswrapper[17411]: I0223 13:16:17.016269 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:16:17.016415 master-0 kubenswrapper[17411]: I0223 13:16:17.016328 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"42cdeb8b7eb8c28b7cf71798320b73487eab2a374dc84ef2d6218c3ff6c02e03"} Feb 23 13:16:17.610455 master-0 kubenswrapper[17411]: I0223 13:16:17.610306 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 23 13:16:17.610455 master-0 kubenswrapper[17411]: I0223 13:16:17.610338 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:16:17.610831 master-0 kubenswrapper[17411]: I0223 13:16:17.610456 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:17.610831 master-0 kubenswrapper[17411]: I0223 13:16:17.610372 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:21.056430 master-0 kubenswrapper[17411]: I0223 13:16:21.056337 17411 generic.go:334] "Generic (PLEG): container finished" podID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" containerID="2a82c81816ea58ba55512744c24143ddbc2f5aefd0d2aef524a9297835676cb3" exitCode=0 Feb 23 13:16:21.056430 master-0 kubenswrapper[17411]: I0223 13:16:21.056415 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" event={"ID":"f88d6ed3-c0a6-4eef-b80c-417994cf69b0","Type":"ContainerDied","Data":"2a82c81816ea58ba55512744c24143ddbc2f5aefd0d2aef524a9297835676cb3"} Feb 23 13:16:21.057624 master-0 kubenswrapper[17411]: I0223 13:16:21.057104 17411 
scope.go:117] "RemoveContainer" containerID="2a82c81816ea58ba55512744c24143ddbc2f5aefd0d2aef524a9297835676cb3" Feb 23 13:16:22.066558 master-0 kubenswrapper[17411]: I0223 13:16:22.066479 17411 generic.go:334] "Generic (PLEG): container finished" podID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" containerID="90c4d565bc8a9a3504b08ffb42ce37fbe9564d90f4149f9a2efe531a546f0e50" exitCode=0 Feb 23 13:16:22.067080 master-0 kubenswrapper[17411]: I0223 13:16:22.066571 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" event={"ID":"b1970ec8-620e-4529-bf3b-1cf9a52c27d3","Type":"ContainerDied","Data":"90c4d565bc8a9a3504b08ffb42ce37fbe9564d90f4149f9a2efe531a546f0e50"} Feb 23 13:16:22.067080 master-0 kubenswrapper[17411]: I0223 13:16:22.066616 17411 scope.go:117] "RemoveContainer" containerID="723e0d3ac0bfebcf9019d23491b2a123aaa94b496865e7bf006a731caaf79830" Feb 23 13:16:22.067207 master-0 kubenswrapper[17411]: I0223 13:16:22.067196 17411 scope.go:117] "RemoveContainer" containerID="90c4d565bc8a9a3504b08ffb42ce37fbe9564d90f4149f9a2efe531a546f0e50" Feb 23 13:16:22.070167 master-0 kubenswrapper[17411]: I0223 13:16:22.070124 17411 generic.go:334] "Generic (PLEG): container finished" podID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" containerID="f20870fedd39a5fcac2849dfe260df528edaaae565ef9981e8dd778b3bbb8634" exitCode=0 Feb 23 13:16:22.070256 master-0 kubenswrapper[17411]: I0223 13:16:22.070177 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" event={"ID":"c33f208a-e158-47e2-83d5-ac792bf3a1d5","Type":"ContainerDied","Data":"f20870fedd39a5fcac2849dfe260df528edaaae565ef9981e8dd778b3bbb8634"} Feb 23 13:16:22.070533 master-0 kubenswrapper[17411]: I0223 13:16:22.070506 17411 scope.go:117] "RemoveContainer" containerID="f20870fedd39a5fcac2849dfe260df528edaaae565ef9981e8dd778b3bbb8634" Feb 23 13:16:22.073183 
master-0 kubenswrapper[17411]: I0223 13:16:22.073139 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" event={"ID":"3ab71705-d574-4f95-b3fc-9f7cf5e8a557","Type":"ContainerDied","Data":"6eb708e99faa68cc0fb3a1744a6c33cf30aa202ca3b55e421e64cd3dbc5a07f1"} Feb 23 13:16:22.073305 master-0 kubenswrapper[17411]: I0223 13:16:22.073057 17411 generic.go:334] "Generic (PLEG): container finished" podID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" containerID="6eb708e99faa68cc0fb3a1744a6c33cf30aa202ca3b55e421e64cd3dbc5a07f1" exitCode=0 Feb 23 13:16:22.073509 master-0 kubenswrapper[17411]: I0223 13:16:22.073487 17411 scope.go:117] "RemoveContainer" containerID="6eb708e99faa68cc0fb3a1744a6c33cf30aa202ca3b55e421e64cd3dbc5a07f1" Feb 23 13:16:22.080450 master-0 kubenswrapper[17411]: I0223 13:16:22.077601 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-t9gx8_99399ebb-c95f-4663-b3b6-f5dfabf47fcf/openshift-controller-manager-operator/0.log" Feb 23 13:16:22.080450 master-0 kubenswrapper[17411]: I0223 13:16:22.077666 17411 generic.go:334] "Generic (PLEG): container finished" podID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" containerID="276f3b55300c4b42b7df0ff3b3561d901d7c658a4848ac016dd56a91f3b44118" exitCode=0 Feb 23 13:16:22.080450 master-0 kubenswrapper[17411]: I0223 13:16:22.077783 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" event={"ID":"99399ebb-c95f-4663-b3b6-f5dfabf47fcf","Type":"ContainerDied","Data":"276f3b55300c4b42b7df0ff3b3561d901d7c658a4848ac016dd56a91f3b44118"} Feb 23 13:16:22.080450 master-0 kubenswrapper[17411]: I0223 13:16:22.080102 17411 scope.go:117] "RemoveContainer" containerID="276f3b55300c4b42b7df0ff3b3561d901d7c658a4848ac016dd56a91f3b44118" Feb 23 
13:16:22.097486 master-0 kubenswrapper[17411]: I0223 13:16:22.097446 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/1.log" Feb 23 13:16:22.098844 master-0 kubenswrapper[17411]: I0223 13:16:22.098807 17411 generic.go:334] "Generic (PLEG): container finished" podID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerID="1d00be7013db5f4871f8f9fcca38d13b794aeb731da6878ede81daa395d911d9" exitCode=0 Feb 23 13:16:22.098911 master-0 kubenswrapper[17411]: I0223 13:16:22.098859 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerDied","Data":"1d00be7013db5f4871f8f9fcca38d13b794aeb731da6878ede81daa395d911d9"} Feb 23 13:16:22.099198 master-0 kubenswrapper[17411]: I0223 13:16:22.099166 17411 scope.go:117] "RemoveContainer" containerID="1d00be7013db5f4871f8f9fcca38d13b794aeb731da6878ede81daa395d911d9" Feb 23 13:16:22.103126 master-0 kubenswrapper[17411]: I0223 13:16:22.103094 17411 generic.go:334] "Generic (PLEG): container finished" podID="25b5540c-da7d-4b6f-a15f-394451f4674e" containerID="93e9de56164a0387038f634504ac664a837d38dcf48d420691331e0584258696" exitCode=0 Feb 23 13:16:22.103173 master-0 kubenswrapper[17411]: I0223 13:16:22.103139 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" event={"ID":"25b5540c-da7d-4b6f-a15f-394451f4674e","Type":"ContainerDied","Data":"93e9de56164a0387038f634504ac664a837d38dcf48d420691331e0584258696"} Feb 23 13:16:22.103436 master-0 kubenswrapper[17411]: I0223 13:16:22.103407 17411 scope.go:117] "RemoveContainer" containerID="93e9de56164a0387038f634504ac664a837d38dcf48d420691331e0584258696" Feb 23 13:16:22.105855 master-0 kubenswrapper[17411]: I0223 13:16:22.105819 17411 
generic.go:334] "Generic (PLEG): container finished" podID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" containerID="2ce8dd30e28f7373e2d6bc5d3ffecbad9102db5068c6325288481dd16f27c6a9" exitCode=0 Feb 23 13:16:22.105904 master-0 kubenswrapper[17411]: I0223 13:16:22.105874 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" event={"ID":"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8","Type":"ContainerDied","Data":"2ce8dd30e28f7373e2d6bc5d3ffecbad9102db5068c6325288481dd16f27c6a9"} Feb 23 13:16:22.106193 master-0 kubenswrapper[17411]: I0223 13:16:22.106163 17411 scope.go:117] "RemoveContainer" containerID="2ce8dd30e28f7373e2d6bc5d3ffecbad9102db5068c6325288481dd16f27c6a9" Feb 23 13:16:22.108395 master-0 kubenswrapper[17411]: I0223 13:16:22.108367 17411 generic.go:334] "Generic (PLEG): container finished" podID="b7585f9f-12e5-451b-beeb-db43ae778f25" containerID="e56396e411b12f7186290221f3fddfff3f3b0e11c3f756be37a285081dee7384" exitCode=0 Feb 23 13:16:22.108466 master-0 kubenswrapper[17411]: I0223 13:16:22.108393 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" event={"ID":"b7585f9f-12e5-451b-beeb-db43ae778f25","Type":"ContainerDied","Data":"e56396e411b12f7186290221f3fddfff3f3b0e11c3f756be37a285081dee7384"} Feb 23 13:16:22.108770 master-0 kubenswrapper[17411]: I0223 13:16:22.108745 17411 scope.go:117] "RemoveContainer" containerID="e56396e411b12f7186290221f3fddfff3f3b0e11c3f756be37a285081dee7384" Feb 23 13:16:22.109617 master-0 kubenswrapper[17411]: I0223 13:16:22.109583 17411 generic.go:334] "Generic (PLEG): container finished" podID="71a07622-3038-4b8c-b6bb-5f28a4115012" containerID="049f73307f806904035423cc3efd5b594e3e2163521bdc03014ba97dd009ed14" exitCode=0 Feb 23 13:16:22.109617 master-0 kubenswrapper[17411]: I0223 13:16:22.109606 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-576b4d78bd-nds57" event={"ID":"71a07622-3038-4b8c-b6bb-5f28a4115012","Type":"ContainerDied","Data":"049f73307f806904035423cc3efd5b594e3e2163521bdc03014ba97dd009ed14"} Feb 23 13:16:22.109941 master-0 kubenswrapper[17411]: I0223 13:16:22.109907 17411 scope.go:117] "RemoveContainer" containerID="049f73307f806904035423cc3efd5b594e3e2163521bdc03014ba97dd009ed14" Feb 23 13:16:22.112796 master-0 kubenswrapper[17411]: I0223 13:16:22.112762 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver-cert-syncer/0.log" Feb 23 13:16:22.113168 master-0 kubenswrapper[17411]: I0223 13:16:22.113141 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver/0.log" Feb 23 13:16:22.113505 master-0 kubenswrapper[17411]: I0223 13:16:22.113476 17411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="1451bfe95dea492070e81afea279bb401c056a53aa2057f0e288509531e88c91" exitCode=137 Feb 23 13:16:22.115076 master-0 kubenswrapper[17411]: I0223 13:16:22.115045 17411 generic.go:334] "Generic (PLEG): container finished" podID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" containerID="5746b4ef817cfb0913d62f6abec0cfefcc90fea76e17ad5446db2699e58dc8b7" exitCode=0 Feb 23 13:16:22.115141 master-0 kubenswrapper[17411]: I0223 13:16:22.115096 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" event={"ID":"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30","Type":"ContainerDied","Data":"5746b4ef817cfb0913d62f6abec0cfefcc90fea76e17ad5446db2699e58dc8b7"} Feb 23 13:16:22.115619 master-0 kubenswrapper[17411]: I0223 13:16:22.115588 17411 scope.go:117] "RemoveContainer" containerID="5746b4ef817cfb0913d62f6abec0cfefcc90fea76e17ad5446db2699e58dc8b7" Feb 23 
13:16:22.122395 master-0 kubenswrapper[17411]: I0223 13:16:22.122209 17411 generic.go:334] "Generic (PLEG): container finished" podID="85958edf-e3da-4704-8f09-cf049101f2e6" containerID="4272a362a8ac66f27c39149ee8833cfb7199e96eefc438602afcb38577af4828" exitCode=0 Feb 23 13:16:22.123426 master-0 kubenswrapper[17411]: I0223 13:16:22.123398 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" event={"ID":"85958edf-e3da-4704-8f09-cf049101f2e6","Type":"ContainerDied","Data":"4272a362a8ac66f27c39149ee8833cfb7199e96eefc438602afcb38577af4828"} Feb 23 13:16:22.123768 master-0 kubenswrapper[17411]: I0223 13:16:22.123748 17411 scope.go:117] "RemoveContainer" containerID="4272a362a8ac66f27c39149ee8833cfb7199e96eefc438602afcb38577af4828" Feb 23 13:16:22.137816 master-0 kubenswrapper[17411]: I0223 13:16:22.137759 17411 generic.go:334] "Generic (PLEG): container finished" podID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" containerID="2232814e0e6f0bab57129339d23cb902f8963539e1dee1b616d27df4af9358d9" exitCode=0 Feb 23 13:16:22.137956 master-0 kubenswrapper[17411]: I0223 13:16:22.137842 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" event={"ID":"ae1799b6-85b0-4aed-8835-35cb3d8d1109","Type":"ContainerDied","Data":"2232814e0e6f0bab57129339d23cb902f8963539e1dee1b616d27df4af9358d9"} Feb 23 13:16:22.138409 master-0 kubenswrapper[17411]: I0223 13:16:22.138380 17411 scope.go:117] "RemoveContainer" containerID="2232814e0e6f0bab57129339d23cb902f8963539e1dee1b616d27df4af9358d9" Feb 23 13:16:22.140804 master-0 kubenswrapper[17411]: I0223 13:16:22.140764 17411 generic.go:334] "Generic (PLEG): container finished" podID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" containerID="d189d5e12511ea80f4cdc17d241c4679d026c6da1f0e8d962f34e26c49ed72ca" exitCode=0 Feb 23 13:16:22.140873 master-0 kubenswrapper[17411]: I0223 13:16:22.140823 17411 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" event={"ID":"c2c8336c-0733-4e20-85ec-062e07b6fdc0","Type":"ContainerDied","Data":"d189d5e12511ea80f4cdc17d241c4679d026c6da1f0e8d962f34e26c49ed72ca"}
Feb 23 13:16:22.141263 master-0 kubenswrapper[17411]: I0223 13:16:22.141211 17411 scope.go:117] "RemoveContainer" containerID="d189d5e12511ea80f4cdc17d241c4679d026c6da1f0e8d962f34e26c49ed72ca"
Feb 23 13:16:22.143405 master-0 kubenswrapper[17411]: I0223 13:16:22.143364 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" event={"ID":"f88d6ed3-c0a6-4eef-b80c-417994cf69b0","Type":"ContainerStarted","Data":"eaf5c82575ca53cf64738eafa679d56a86938238183995384c4ed1f6782f3ea2"}
Feb 23 13:16:22.181326 master-0 kubenswrapper[17411]: I0223 13:16:22.181092 17411 scope.go:117] "RemoveContainer" containerID="3ae29be9fa54806971b4e3b9c2201c003f7b8a22a37869a91acf05e5506d41f9"
Feb 23 13:16:22.344171 master-0 kubenswrapper[17411]: I0223 13:16:22.344081 17411 scope.go:117] "RemoveContainer" containerID="debed11d31f7b75fad2471852851fc7fa04c00d3d8576daf98e7b22222001920"
Feb 23 13:16:22.519296 master-0 kubenswrapper[17411]: I0223 13:16:22.516567 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8"
Feb 23 13:16:22.519296 master-0 kubenswrapper[17411]: I0223 13:16:22.516620 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8"
Feb 23 13:16:22.539014 master-0 kubenswrapper[17411]: I0223 13:16:22.538971 17411 scope.go:117] "RemoveContainer" containerID="c62b96fd922cdecfa004e96b0409b64671fda2f755f956fa786e2d7faadf3475"
Feb 23 13:16:22.567658 master-0 kubenswrapper[17411]: I0223 13:16:22.567607 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver-cert-syncer/0.log"
Feb 23 13:16:22.568559 master-0 kubenswrapper[17411]: I0223 13:16:22.568278 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver/0.log"
Feb 23 13:16:22.568920 master-0 kubenswrapper[17411]: I0223 13:16:22.568885 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:22.651531 master-0 kubenswrapper[17411]: I0223 13:16:22.649356 17411 scope.go:117] "RemoveContainer" containerID="c7bf15e370636a4712d661fd1bd5bae0ffc88b863a6740ad094330d58359da39"
Feb 23 13:16:22.702825 master-0 kubenswrapper[17411]: I0223 13:16:22.700422 17411 scope.go:117] "RemoveContainer" containerID="fc76a6ebf82c376de367ae9069a978505805d785a26a3e42e6dad2867b699aeb"
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.705533 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"888e23114cf20f3bf6573c5f7b88d7d0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") "
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.705572 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"888e23114cf20f3bf6573c5f7b88d7d0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") "
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.705707 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"888e23114cf20f3bf6573c5f7b88d7d0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") "
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.705693 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "888e23114cf20f3bf6573c5f7b88d7d0" (UID: "888e23114cf20f3bf6573c5f7b88d7d0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.705773 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "888e23114cf20f3bf6573c5f7b88d7d0" (UID: "888e23114cf20f3bf6573c5f7b88d7d0"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.705864 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "888e23114cf20f3bf6573c5f7b88d7d0" (UID: "888e23114cf20f3bf6573c5f7b88d7d0"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.706287 17411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.706468 17411 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:22.706794 master-0 kubenswrapper[17411]: I0223 13:16:22.706731 17411 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:16:22.836314 master-0 kubenswrapper[17411]: I0223 13:16:22.835922 17411 scope.go:117] "RemoveContainer" containerID="bc8ade9334364114738902823dc600f3740baca0ab4d65155426a77698e2093f"
Feb 23 13:16:22.909776 master-0 kubenswrapper[17411]: I0223 13:16:22.907645 17411 scope.go:117] "RemoveContainer" containerID="8ede5ecb3a272a47d1a15ebb39f7a70622cc8eaa31a144f09ad6e73baceca956"
Feb 23 13:16:22.919045 master-0 kubenswrapper[17411]: I0223 13:16:22.916604 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888e23114cf20f3bf6573c5f7b88d7d0" path="/var/lib/kubelet/pods/888e23114cf20f3bf6573c5f7b88d7d0/volumes"
Feb 23 13:16:22.919045 master-0 kubenswrapper[17411]: I0223 13:16:22.917462 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:16:22.919045 master-0 kubenswrapper[17411]: I0223 13:16:22.918877 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:16:23.171643 master-0 kubenswrapper[17411]: I0223 13:16:23.169849 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" event={"ID":"b1970ec8-620e-4529-bf3b-1cf9a52c27d3","Type":"ContainerStarted","Data":"1a0344d531e84ba87458cf9e245595bf26beb8556c42c2a98575065196b12964"}
Feb 23 13:16:23.172314 master-0 kubenswrapper[17411]: I0223 13:16:23.172277 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" event={"ID":"c2c8336c-0733-4e20-85ec-062e07b6fdc0","Type":"ContainerStarted","Data":"308918d612c965236a3f0fcd42415b3eb575a632bd917ed7acfcbcdf1727a22f"}
Feb 23 13:16:23.179844 master-0 kubenswrapper[17411]: I0223 13:16:23.179780 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" event={"ID":"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8","Type":"ContainerStarted","Data":"2c1de830984a0507238799826eac1f7e8b3e85789c4103320e7f2ff4a2d7b339"}
Feb 23 13:16:23.179924 master-0 kubenswrapper[17411]: I0223 13:16:23.179844 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8"
Feb 23 13:16:23.180625 master-0 kubenswrapper[17411]: E0223 13:16:23.180375 17411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e285e6df5cea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:afeec80f2ec1ff5cb32c2367912befef,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container: startup-monitor,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:16:08.148794602 +0000 UTC m=+561.576301199,LastTimestamp:2026-02-23 13:16:08.148794602 +0000 UTC m=+561.576301199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 23 13:16:23.180865 master-0 kubenswrapper[17411]: I0223 13:16:23.180642 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerStarted","Data":"b9c687a3f5c3743ab7129ad40d992c8bb14afad9eb63849349528e53a314cb38"}
Feb 23 13:16:23.181169 master-0 kubenswrapper[17411]: I0223 13:16:23.181137 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:16:23.182601 master-0 kubenswrapper[17411]: I0223 13:16:23.182558 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" event={"ID":"71a07622-3038-4b8c-b6bb-5f28a4115012","Type":"ContainerStarted","Data":"a46afb690c12f34d591fbefec336bbc94039270416c52a883ecc6b6372765700"}
Feb 23 13:16:23.186328 master-0 kubenswrapper[17411]: I0223 13:16:23.185981 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver-cert-syncer/0.log"
Feb 23 13:16:23.191297 master-0 kubenswrapper[17411]: I0223 13:16:23.186645 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver/0.log"
Feb 23 13:16:23.191297 master-0 kubenswrapper[17411]: I0223 13:16:23.187587 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:23.191297 master-0 kubenswrapper[17411]: I0223 13:16:23.187595 17411 scope.go:117] "RemoveContainer" containerID="061d7a30e7243aaf925347846dddb4f9e340978170f0d9805e39811eeb5a64eb"
Feb 23 13:16:23.191297 master-0 kubenswrapper[17411]: I0223 13:16:23.190715 17411 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23f6e482-8da1-4df0-8de6-66a930e45a20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"reason\\\":\\\"PodCompleted\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:16:07Z\\\",\\\"reason\\\":\\\"PodCompleted\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:16:07Z\\\",\\\"reason\\\":\\\"PodCompleted\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e430dd00f0a0105863d8293fdc97c4fe96bc4ed6b8ff010a52f450aad23346b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"installer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e430dd00f0a0105863d8293fdc97c4fe96bc4ed6b8ff010a52f450aad23346b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/\\\",\\\"name\\\":\\\"kubelet-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lock\\\",\\\"name\\\":\\\"var-lock\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"installer-4-retry-1-master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0/status\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.191297 master-0 kubenswrapper[17411]: I0223 13:16:23.191221 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.199744 master-0 kubenswrapper[17411]: I0223 13:16:23.199679 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.200789 master-0 kubenswrapper[17411]: I0223 13:16:23.200744 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.201233 master-0 kubenswrapper[17411]: I0223 13:16:23.201191 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.201827 master-0 kubenswrapper[17411]: I0223 13:16:23.201748 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.202129 master-0 kubenswrapper[17411]: I0223 13:16:23.202093 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.202587 master-0 kubenswrapper[17411]: I0223 13:16:23.202549 17411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.202923 master-0 kubenswrapper[17411]: I0223 13:16:23.202885 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.203230 master-0 kubenswrapper[17411]: I0223 13:16:23.203196 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.203752 master-0 kubenswrapper[17411]: I0223 13:16:23.203693 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.204260 master-0 kubenswrapper[17411]: I0223 13:16:23.204201 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.204326 master-0 kubenswrapper[17411]: I0223 13:16:23.204266 17411 scope.go:117] "RemoveContainer" containerID="af37724971496c567478e8ee1bc3c4cea631a17cbc43ca93ff3d0e2473a64b7f"
Feb 23 13:16:23.204688 master-0 kubenswrapper[17411]: I0223 13:16:23.204652 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.205115 master-0 kubenswrapper[17411]: I0223 13:16:23.205079 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.205544 master-0 kubenswrapper[17411]: I0223 13:16:23.205509 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.206045 master-0 kubenswrapper[17411]: I0223 13:16:23.206011 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.206513 master-0 kubenswrapper[17411]: I0223 13:16:23.206479 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.206894 master-0 kubenswrapper[17411]: I0223 13:16:23.206859 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.207294 master-0 kubenswrapper[17411]: I0223 13:16:23.207260 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.207729 master-0 kubenswrapper[17411]: I0223 13:16:23.207697 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.208102 master-0 kubenswrapper[17411]: I0223 13:16:23.208069 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.208498 master-0 kubenswrapper[17411]: I0223 13:16:23.208464 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.208871 master-0 kubenswrapper[17411]: I0223 13:16:23.208837 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.216019 master-0 kubenswrapper[17411]: I0223 13:16:23.215975 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" event={"ID":"3ab71705-d574-4f95-b3fc-9f7cf5e8a557","Type":"ContainerStarted","Data":"fec2b56ffa3c2fda91463659eb4be75b35169045cf2435badc161811557532bd"}
Feb 23 13:16:23.218266 master-0 kubenswrapper[17411]: I0223 13:16:23.217160 17411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.218266 master-0 kubenswrapper[17411]: I0223 13:16:23.217619 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.218266 master-0 kubenswrapper[17411]: I0223 13:16:23.218017 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.218792 master-0 kubenswrapper[17411]: I0223 13:16:23.218722 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.220223 master-0 kubenswrapper[17411]: I0223 13:16:23.219110 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.220223 master-0 kubenswrapper[17411]: I0223 13:16:23.219519 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.220223 master-0 kubenswrapper[17411]: I0223 13:16:23.220018 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.221279 master-0 kubenswrapper[17411]: I0223 13:16:23.220452 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.221279 master-0 kubenswrapper[17411]: I0223 13:16:23.221180 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.224998 master-0 kubenswrapper[17411]: I0223 13:16:23.224969 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.226121 master-0 kubenswrapper[17411]: I0223 13:16:23.226054 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.227305 master-0 kubenswrapper[17411]: I0223 13:16:23.227224 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.227893 master-0 kubenswrapper[17411]: I0223 13:16:23.227834 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.227974 master-0 kubenswrapper[17411]: I0223 13:16:23.227923 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" event={"ID":"99399ebb-c95f-4663-b3b6-f5dfabf47fcf","Type":"ContainerStarted","Data":"b51fc341743d0ee14779ec259987403cb18ccfb83872ba04b66accc494822766"}
Feb 23 13:16:23.230685 master-0 kubenswrapper[17411]: I0223 13:16:23.230594 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.232893 master-0 kubenswrapper[17411]: I0223 13:16:23.232732 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.233350 master-0 kubenswrapper[17411]: I0223 13:16:23.233295 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.237455 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.237943 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.238642 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.239640 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.240332 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.240817 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.241138 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.241583 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.242187 master-0 kubenswrapper[17411]: I0223 13:16:23.241754 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" event={"ID":"c33f208a-e158-47e2-83d5-ac792bf3a1d5","Type":"ContainerStarted","Data":"f51821048115f73bfa5af0633ed01e681db3061c361638cc8683653953349e32"}
Feb 23 13:16:23.242751 master-0 kubenswrapper[17411]: I0223 13:16:23.242716 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.243290 master-0 kubenswrapper[17411]: I0223 13:16:23.243113 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.244687 master-0 kubenswrapper[17411]: I0223 13:16:23.243906 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.244687 master-0 kubenswrapper[17411]: I0223 13:16:23.244264 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.245266 master-0 kubenswrapper[17411]: I0223 13:16:23.245047 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.247505 master-0 kubenswrapper[17411]: I0223 13:16:23.245450 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.247505 master-0 kubenswrapper[17411]: I0223 13:16:23.245756 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" event={"ID":"85958edf-e3da-4704-8f09-cf049101f2e6","Type":"ContainerStarted","Data":"572adce0898517c28a62db674ddcd17adbcb67fab14cc4ebab5a178cb0e2af67"}
Feb 23 13:16:23.248599 master-0 kubenswrapper[17411]: I0223 13:16:23.248564 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:23.249012 master-0 kubenswrapper[17411]: I0223 13:16:23.248976 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23
13:16:23.249603 master-0 kubenswrapper[17411]: I0223 13:16:23.249563 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.250906 master-0 kubenswrapper[17411]: I0223 13:16:23.249929 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.255745 master-0 kubenswrapper[17411]: I0223 13:16:23.251954 17411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.255745 master-0 kubenswrapper[17411]: I0223 13:16:23.253611 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.255745 master-0 kubenswrapper[17411]: I0223 13:16:23.253722 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" event={"ID":"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30","Type":"ContainerStarted","Data":"58697c87cd4c1a073964d8c5dbb45b8508190c35e0ffc3e1b2ec68e7b6317288"} Feb 23 13:16:23.255745 master-0 kubenswrapper[17411]: I0223 13:16:23.254158 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.255745 master-0 kubenswrapper[17411]: I0223 13:16:23.254754 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.255745 master-0 kubenswrapper[17411]: I0223 13:16:23.255371 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.257917 master-0 kubenswrapper[17411]: I0223 13:16:23.257833 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.260754 master-0 kubenswrapper[17411]: I0223 13:16:23.260691 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.260754 master-0 kubenswrapper[17411]: I0223 13:16:23.260738 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" event={"ID":"b7585f9f-12e5-451b-beeb-db43ae778f25","Type":"ContainerStarted","Data":"9b83034b1e523498c93eb4e5fde2c67e0c10856a13b30b5b22d21e82983a70f1"} Feb 23 13:16:23.261188 master-0 kubenswrapper[17411]: I0223 13:16:23.261142 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.261904 master-0 kubenswrapper[17411]: I0223 13:16:23.261864 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.263632 master-0 kubenswrapper[17411]: I0223 13:16:23.263584 
17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.264499 master-0 kubenswrapper[17411]: I0223 13:16:23.264096 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.264606 master-0 kubenswrapper[17411]: I0223 13:16:23.264572 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" event={"ID":"25b5540c-da7d-4b6f-a15f-394451f4674e","Type":"ContainerStarted","Data":"b4325f84094f6a5f8ce69935fd5dcef125ec5b0e7208b70b7184af2ce6c4e6e7"} Feb 23 13:16:23.265025 master-0 kubenswrapper[17411]: I0223 13:16:23.264971 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.267347 master-0 kubenswrapper[17411]: I0223 13:16:23.266620 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" 
event={"ID":"ae1799b6-85b0-4aed-8835-35cb3d8d1109","Type":"ContainerStarted","Data":"fef4f8449d382c2b35398416206a546296a87b3c5b9bd1199e39bfceb5c14dae"} Feb 23 13:16:23.267771 master-0 kubenswrapper[17411]: I0223 13:16:23.267492 17411 scope.go:117] "RemoveContainer" containerID="219fe31af98ac0a70bf5c99e980eff392eafdb712a96f15192f2e77ddadeb718" Feb 23 13:16:23.269604 master-0 kubenswrapper[17411]: I0223 13:16:23.269513 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.270101 master-0 kubenswrapper[17411]: I0223 13:16:23.270025 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.270650 master-0 kubenswrapper[17411]: I0223 13:16:23.270605 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.271025 master-0 kubenswrapper[17411]: I0223 13:16:23.270982 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.271501 master-0 kubenswrapper[17411]: I0223 13:16:23.271436 17411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.271851 master-0 kubenswrapper[17411]: I0223 13:16:23.271820 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.272558 master-0 kubenswrapper[17411]: I0223 13:16:23.272486 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.273402 master-0 kubenswrapper[17411]: I0223 13:16:23.273372 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.273952 master-0 kubenswrapper[17411]: I0223 13:16:23.273923 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.274801 master-0 kubenswrapper[17411]: I0223 13:16:23.274766 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.275441 master-0 kubenswrapper[17411]: I0223 13:16:23.275405 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.276014 master-0 kubenswrapper[17411]: I0223 13:16:23.275985 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.283869 master-0 kubenswrapper[17411]: I0223 13:16:23.283805 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.284331 master-0 kubenswrapper[17411]: I0223 13:16:23.284234 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.284865 master-0 kubenswrapper[17411]: I0223 13:16:23.284829 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.286262 master-0 kubenswrapper[17411]: I0223 13:16:23.286180 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.287316 master-0 kubenswrapper[17411]: I0223 13:16:23.287063 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.287873 master-0 kubenswrapper[17411]: I0223 13:16:23.287836 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.288687 master-0 kubenswrapper[17411]: I0223 13:16:23.288658 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.289089 master-0 kubenswrapper[17411]: I0223 13:16:23.289057 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 
192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.289960 master-0 kubenswrapper[17411]: I0223 13:16:23.289922 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.290532 master-0 kubenswrapper[17411]: I0223 13:16:23.290493 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.291165 master-0 kubenswrapper[17411]: I0223 13:16:23.291123 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.292739 master-0 kubenswrapper[17411]: I0223 13:16:23.292679 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.293224 master-0 kubenswrapper[17411]: 
I0223 13:16:23.293157 17411 scope.go:117] "RemoveContainer" containerID="dc5ce8696fe6f5fe40f802dd027c3d1021d387667d3f9353461a3632d607781a" Feb 23 13:16:23.293352 master-0 kubenswrapper[17411]: I0223 13:16:23.293299 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.294415 master-0 kubenswrapper[17411]: I0223 13:16:23.294353 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.296205 master-0 kubenswrapper[17411]: I0223 13:16:23.296156 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.298822 master-0 kubenswrapper[17411]: I0223 13:16:23.298788 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.299199 master-0 kubenswrapper[17411]: 
I0223 13:16:23.299175 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.299776 master-0 kubenswrapper[17411]: I0223 13:16:23.299751 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.300606 master-0 kubenswrapper[17411]: I0223 13:16:23.300571 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.301731 master-0 kubenswrapper[17411]: I0223 13:16:23.301632 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.302200 master-0 kubenswrapper[17411]: I0223 13:16:23.302149 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" 
pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.302653 master-0 kubenswrapper[17411]: I0223 13:16:23.302611 17411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.303735 master-0 kubenswrapper[17411]: I0223 13:16:23.303526 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.304919 master-0 kubenswrapper[17411]: I0223 13:16:23.304872 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.312495 master-0 kubenswrapper[17411]: I0223 13:16:23.312416 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.313202 master-0 kubenswrapper[17411]: I0223 13:16:23.312894 17411 scope.go:117] "RemoveContainer" containerID="1451bfe95dea492070e81afea279bb401c056a53aa2057f0e288509531e88c91" Feb 23 13:16:23.331364 master-0 kubenswrapper[17411]: I0223 13:16:23.331279 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:23.346417 master-0 kubenswrapper[17411]: I0223 13:16:23.346365 17411 scope.go:117] "RemoveContainer" containerID="eb83d6db2b81eff670c43e4f30b6b4176f20d325f24bb246edc1393395f0fde8" Feb 23 13:16:23.869085 master-0 kubenswrapper[17411]: I0223 13:16:23.868925 17411 scope.go:117] "RemoveContainer" containerID="7542932db8ce52dd0433bcdb6da61f01bd8b820ad9cbce4b661a7f58c10cfefe" Feb 23 13:16:24.180053 master-0 kubenswrapper[17411]: I0223 13:16:24.179753 17411 patch_prober.go:28] interesting pod/route-controller-manager-78784b9d57-r4sf8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.89:8443/healthz\": context deadline exceeded" start-of-body= Feb 23 13:16:24.180053 master-0 kubenswrapper[17411]: I0223 13:16:24.179857 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.128.0.89:8443/healthz\": context deadline exceeded" Feb 23 13:16:24.278444 master-0 kubenswrapper[17411]: I0223 13:16:24.278379 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/5.log" Feb 23 13:16:24.278657 master-0 kubenswrapper[17411]: I0223 13:16:24.278542 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8"} Feb 23 13:16:24.280572 master-0 kubenswrapper[17411]: I0223 13:16:24.280511 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.281146 master-0 kubenswrapper[17411]: I0223 13:16:24.281090 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.281676 master-0 kubenswrapper[17411]: I0223 13:16:24.281624 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.282127 master-0 kubenswrapper[17411]: I0223 13:16:24.282091 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/3.log" Feb 23 13:16:24.282209 master-0 kubenswrapper[17411]: I0223 13:16:24.282168 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.283371 master-0 kubenswrapper[17411]: I0223 13:16:24.282854 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.284171 master-0 kubenswrapper[17411]: I0223 13:16:24.284093 17411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.284537 master-0 kubenswrapper[17411]: I0223 13:16:24.284506 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:16:24.284605 master-0 kubenswrapper[17411]: I0223 13:16:24.284559 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="7be9444f5b625e402453341f193b326bd7008df65bbec6d9b42b674fec217d14" exitCode=0 Feb 23 13:16:24.284605 master-0 kubenswrapper[17411]: I0223 13:16:24.284582 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.285461 master-0 kubenswrapper[17411]: I0223 13:16:24.284763 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerDied","Data":"7be9444f5b625e402453341f193b326bd7008df65bbec6d9b42b674fec217d14"} Feb 23 13:16:24.285461 master-0 kubenswrapper[17411]: I0223 13:16:24.284906 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.286377 master-0 kubenswrapper[17411]: I0223 13:16:24.285811 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" 
pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.286377 master-0 kubenswrapper[17411]: I0223 13:16:24.285997 17411 scope.go:117] "RemoveContainer" containerID="7be9444f5b625e402453341f193b326bd7008df65bbec6d9b42b674fec217d14" Feb 23 13:16:24.286377 master-0 kubenswrapper[17411]: I0223 13:16:24.286351 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.287454 master-0 kubenswrapper[17411]: I0223 13:16:24.287133 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.287777 master-0 kubenswrapper[17411]: I0223 13:16:24.287734 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.288870 master-0 kubenswrapper[17411]: 
I0223 13:16:24.288395 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.289095 master-0 kubenswrapper[17411]: I0223 13:16:24.289046 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.289490 master-0 kubenswrapper[17411]: I0223 13:16:24.289466 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.290041 master-0 kubenswrapper[17411]: I0223 13:16:24.289951 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.290571 master-0 kubenswrapper[17411]: I0223 13:16:24.290533 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.291128 master-0 kubenswrapper[17411]: I0223 13:16:24.291066 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.293004 master-0 kubenswrapper[17411]: I0223 13:16:24.292960 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.293961 master-0 kubenswrapper[17411]: I0223 13:16:24.293874 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.294429 master-0 kubenswrapper[17411]: I0223 13:16:24.294385 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.294929 master-0 kubenswrapper[17411]: I0223 13:16:24.294862 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.295555 master-0 kubenswrapper[17411]: I0223 13:16:24.295490 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.295983 master-0 kubenswrapper[17411]: I0223 13:16:24.295933 17411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.296392 master-0 kubenswrapper[17411]: I0223 13:16:24.296350 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection 
refused" Feb 23 13:16:24.296829 master-0 kubenswrapper[17411]: I0223 13:16:24.296754 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.297533 master-0 kubenswrapper[17411]: I0223 13:16:24.297491 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.298410 master-0 kubenswrapper[17411]: I0223 13:16:24.298053 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.298640 master-0 kubenswrapper[17411]: I0223 13:16:24.298583 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.299177 master-0 kubenswrapper[17411]: I0223 
13:16:24.299140 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.299900 master-0 kubenswrapper[17411]: I0223 13:16:24.299852 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.300577 master-0 kubenswrapper[17411]: I0223 13:16:24.300525 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.301520 master-0 kubenswrapper[17411]: I0223 13:16:24.301284 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.302103 master-0 kubenswrapper[17411]: I0223 13:16:24.301996 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.302968 master-0 kubenswrapper[17411]: I0223 13:16:24.302918 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.303551 master-0 kubenswrapper[17411]: I0223 13:16:24.303523 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.304090 master-0 kubenswrapper[17411]: I0223 13:16:24.304054 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.305511 master-0 kubenswrapper[17411]: I0223 13:16:24.305478 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.306304 master-0 kubenswrapper[17411]: I0223 13:16:24.306024 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.306712 master-0 kubenswrapper[17411]: I0223 13:16:24.306662 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.307504 master-0 kubenswrapper[17411]: I0223 13:16:24.307449 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.308112 master-0 kubenswrapper[17411]: I0223 13:16:24.307901 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: 
connect: connection refused" Feb 23 13:16:24.308396 master-0 kubenswrapper[17411]: I0223 13:16:24.308372 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.308754 master-0 kubenswrapper[17411]: I0223 13:16:24.308731 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:24.586042 master-0 kubenswrapper[17411]: E0223 13:16:24.585126 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 23 13:16:25.285406 master-0 kubenswrapper[17411]: I0223 13:16:25.285047 17411 patch_prober.go:28] interesting pod/route-controller-manager-78784b9d57-r4sf8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.89:8443/healthz\": context deadline exceeded" start-of-body= Feb 23 13:16:25.285406 master-0 kubenswrapper[17411]: I0223 13:16:25.285127 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.128.0.89:8443/healthz\": context deadline exceeded" Feb 23 13:16:25.300762 master-0 kubenswrapper[17411]: I0223 13:16:25.300710 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/3.log" Feb 23 13:16:25.303587 master-0 kubenswrapper[17411]: I0223 13:16:25.303538 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:16:25.304474 master-0 kubenswrapper[17411]: I0223 13:16:25.304419 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"38b7ce474df02ea287eb02ea513a627a","Type":"ContainerStarted","Data":"1b5f99f63dd002feaf41abedc78477cbb67500c7fee6071e3fdb7a32dbad49a8"} Feb 23 13:16:25.305962 master-0 kubenswrapper[17411]: I0223 13:16:25.305919 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.306777 master-0 kubenswrapper[17411]: I0223 13:16:25.306728 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.307367 master-0 kubenswrapper[17411]: I0223 
13:16:25.307314 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.307836 master-0 kubenswrapper[17411]: I0223 13:16:25.307808 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.308428 master-0 kubenswrapper[17411]: I0223 13:16:25.308388 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.309035 master-0 kubenswrapper[17411]: I0223 13:16:25.309000 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.309578 master-0 kubenswrapper[17411]: I0223 13:16:25.309538 17411 status_manager.go:851] "Failed to get status for pod" 
podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.310181 master-0 kubenswrapper[17411]: I0223 13:16:25.310147 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.310958 master-0 kubenswrapper[17411]: I0223 13:16:25.310898 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.312009 master-0 kubenswrapper[17411]: I0223 13:16:25.311976 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.312607 master-0 kubenswrapper[17411]: I0223 13:16:25.312580 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.313257 master-0 kubenswrapper[17411]: I0223 13:16:25.313184 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.314025 master-0 kubenswrapper[17411]: I0223 13:16:25.313982 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.314963 master-0 kubenswrapper[17411]: I0223 13:16:25.314925 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.315465 master-0 kubenswrapper[17411]: I0223 13:16:25.315436 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.316079 master-0 kubenswrapper[17411]: I0223 13:16:25.316042 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.316682 master-0 kubenswrapper[17411]: I0223 13:16:25.316652 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.317153 master-0 kubenswrapper[17411]: I0223 13:16:25.317116 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.317953 master-0 kubenswrapper[17411]: I0223 13:16:25.317572 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.318194 master-0 kubenswrapper[17411]: I0223 13:16:25.318156 17411 status_manager.go:851] "Failed to get status for pod" 
podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.318747 master-0 kubenswrapper[17411]: I0223 13:16:25.318669 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.614519 master-0 kubenswrapper[17411]: E0223 13:16:25.614236 17411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1896e285e6df5cea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:afeec80f2ec1ff5cb32c2367912befef,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container: startup-monitor,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-23 13:16:08.148794602 +0000 UTC m=+561.576301199,LastTimestamp:2026-02-23 13:16:08.148794602 +0000 UTC m=+561.576301199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 23 13:16:25.869700 master-0 kubenswrapper[17411]: I0223 13:16:25.869557 17411 
scope.go:117] "RemoveContainer" containerID="0813bfb6e953cd7dccc120a35be8130ef691d39b2802203da3ff37c1fe23401a" Feb 23 13:16:25.870072 master-0 kubenswrapper[17411]: E0223 13:16:25.869992 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-8mxs2_openshift-machine-api(16898873-740b-4b85-99cf-d25a28d4ab00)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" Feb 23 13:16:25.870266 master-0 kubenswrapper[17411]: I0223 13:16:25.870170 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.870910 master-0 kubenswrapper[17411]: I0223 13:16:25.870854 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.871720 master-0 kubenswrapper[17411]: I0223 13:16:25.871671 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.872319 master-0 kubenswrapper[17411]: I0223 13:16:25.872214 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.872888 master-0 kubenswrapper[17411]: I0223 13:16:25.872835 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.873656 master-0 kubenswrapper[17411]: I0223 13:16:25.873616 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.874236 master-0 kubenswrapper[17411]: I0223 13:16:25.874185 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.874744 master-0 kubenswrapper[17411]: I0223 13:16:25.874691 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.875314 master-0 kubenswrapper[17411]: I0223 13:16:25.875221 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.875782 master-0 kubenswrapper[17411]: I0223 13:16:25.875738 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.876276 master-0 kubenswrapper[17411]: I0223 13:16:25.876211 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
Feb 23 13:16:25.876793 master-0 kubenswrapper[17411]: I0223 13:16:25.876711 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.877388 master-0 kubenswrapper[17411]: I0223 13:16:25.877307 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.877767 master-0 kubenswrapper[17411]: I0223 13:16:25.877724 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.878561 master-0 kubenswrapper[17411]: I0223 13:16:25.878122 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.878681 master-0 kubenswrapper[17411]: I0223 13:16:25.878565 17411 status_manager.go:851] "Failed to get status for pod" 
podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.879064 master-0 kubenswrapper[17411]: I0223 13:16:25.879010 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.879490 master-0 kubenswrapper[17411]: I0223 13:16:25.879445 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.879863 master-0 kubenswrapper[17411]: I0223 13:16:25.879818 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.880513 master-0 kubenswrapper[17411]: I0223 13:16:25.880461 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.881393 master-0 kubenswrapper[17411]: I0223 13:16:25.881343 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:25.937276 master-0 kubenswrapper[17411]: I0223 13:16:25.937115 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:16:25.937505 master-0 kubenswrapper[17411]: I0223 13:16:25.937461 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:16:26.304580 master-0 kubenswrapper[17411]: I0223 13:16:26.304521 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:16:26.305119 master-0 kubenswrapper[17411]: I0223 13:16:26.304640 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:26.314976 master-0 kubenswrapper[17411]: I0223 13:16:26.314933 17411 generic.go:334] "Generic (PLEG): container finished" podID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" containerID="5db2f0540bf3595f6491e89d67843156a4d64e6dce1fba55ec53b1c3ad371af1" exitCode=0 Feb 23 13:16:26.315114 master-0 kubenswrapper[17411]: I0223 13:16:26.315040 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" event={"ID":"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd","Type":"ContainerDied","Data":"5db2f0540bf3595f6491e89d67843156a4d64e6dce1fba55ec53b1c3ad371af1"} Feb 23 13:16:26.316033 master-0 kubenswrapper[17411]: I0223 13:16:26.315991 17411 scope.go:117] "RemoveContainer" containerID="5db2f0540bf3595f6491e89d67843156a4d64e6dce1fba55ec53b1c3ad371af1" Feb 23 13:16:26.316464 master-0 kubenswrapper[17411]: I0223 13:16:26.316427 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.317542 master-0 kubenswrapper[17411]: I0223 13:16:26.317486 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.318441 master-0 kubenswrapper[17411]: I0223 13:16:26.318125 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-zkmdz_8db940c1-82ba-4b6e-8137-059e26ab1ced/machine-api-operator/0.log" Feb 23 13:16:26.318575 master-0 kubenswrapper[17411]: I0223 13:16:26.318471 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.320323 master-0 kubenswrapper[17411]: I0223 13:16:26.319873 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.320323 master-0 kubenswrapper[17411]: I0223 13:16:26.320270 17411 generic.go:334] "Generic (PLEG): container finished" podID="8db940c1-82ba-4b6e-8137-059e26ab1ced" containerID="c10ab2ee9ebfa349f56fe76937a41bcc4073bbb1da67ba666a8653aa33c15175" exitCode=255 Feb 23 13:16:26.320402 master-0 kubenswrapper[17411]: I0223 13:16:26.320350 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" event={"ID":"8db940c1-82ba-4b6e-8137-059e26ab1ced","Type":"ContainerDied","Data":"c10ab2ee9ebfa349f56fe76937a41bcc4073bbb1da67ba666a8653aa33c15175"} Feb 23 13:16:26.320802 master-0 kubenswrapper[17411]: I0223 13:16:26.320770 17411 scope.go:117] "RemoveContainer" containerID="c10ab2ee9ebfa349f56fe76937a41bcc4073bbb1da67ba666a8653aa33c15175" Feb 23 13:16:26.321118 master-0 kubenswrapper[17411]: I0223 13:16:26.321061 17411 status_manager.go:851] "Failed to get status for pod" 
podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.322098 master-0 kubenswrapper[17411]: I0223 13:16:26.322038 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.323464 master-0 kubenswrapper[17411]: I0223 13:16:26.323405 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.324129 master-0 kubenswrapper[17411]: I0223 13:16:26.324079 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.324383 master-0 kubenswrapper[17411]: I0223 13:16:26.324355 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/1.log" Feb 23 13:16:26.324437 master-0 kubenswrapper[17411]: I0223 13:16:26.324395 17411 generic.go:334] "Generic (PLEG): container finished" podID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" containerID="28759b105ef16fc9766c38f67df6c142da73e18661733246b760f77ad371c2c7" exitCode=0 Feb 23 13:16:26.324655 master-0 kubenswrapper[17411]: I0223 13:16:26.324602 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerDied","Data":"28759b105ef16fc9766c38f67df6c142da73e18661733246b760f77ad371c2c7"} Feb 23 13:16:26.324720 master-0 kubenswrapper[17411]: I0223 13:16:26.324699 17411 scope.go:117] "RemoveContainer" containerID="548c2b6ddec877e25587f0b887e8188520ed011da1cb3c86a39995da4b475367" Feb 23 13:16:26.324929 master-0 kubenswrapper[17411]: I0223 13:16:26.324877 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.325054 master-0 kubenswrapper[17411]: I0223 13:16:26.325032 17411 scope.go:117] "RemoveContainer" containerID="28759b105ef16fc9766c38f67df6c142da73e18661733246b760f77ad371c2c7" Feb 23 13:16:26.326494 master-0 kubenswrapper[17411]: I0223 13:16:26.326432 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.327158 master-0 kubenswrapper[17411]: I0223 13:16:26.327102 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.328364 master-0 kubenswrapper[17411]: I0223 13:16:26.328304 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.329079 master-0 kubenswrapper[17411]: I0223 13:16:26.329027 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 kubenswrapper[17411]: I0223 13:16:26.330060 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": 
dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 kubenswrapper[17411]: I0223 13:16:26.330692 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 kubenswrapper[17411]: I0223 13:16:26.331100 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 kubenswrapper[17411]: I0223 13:16:26.332665 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 kubenswrapper[17411]: I0223 13:16:26.333407 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 
kubenswrapper[17411]: I0223 13:16:26.333791 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 kubenswrapper[17411]: I0223 13:16:26.334133 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 kubenswrapper[17411]: I0223 13:16:26.334701 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.335351 master-0 kubenswrapper[17411]: I0223 13:16:26.335045 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.336143 master-0 kubenswrapper[17411]: I0223 13:16:26.335576 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" 
pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.337336 master-0 kubenswrapper[17411]: I0223 13:16:26.336263 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.339753 master-0 kubenswrapper[17411]: I0223 13:16:26.339656 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.340774 master-0 kubenswrapper[17411]: I0223 13:16:26.340692 17411 status_manager.go:851] "Failed to get status for pod" podUID="8db940c1-82ba-4b6e-8137-059e26ab1ced" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-zkmdz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.341599 master-0 kubenswrapper[17411]: I0223 13:16:26.341532 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.342444 master-0 kubenswrapper[17411]: I0223 13:16:26.342384 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.343157 master-0 kubenswrapper[17411]: I0223 13:16:26.343091 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.344383 master-0 kubenswrapper[17411]: I0223 13:16:26.344313 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.345434 master-0 kubenswrapper[17411]: I0223 13:16:26.345301 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection 
refused" Feb 23 13:16:26.346315 master-0 kubenswrapper[17411]: I0223 13:16:26.346233 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.347069 master-0 kubenswrapper[17411]: I0223 13:16:26.346994 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.348083 master-0 kubenswrapper[17411]: I0223 13:16:26.347997 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.349052 master-0 kubenswrapper[17411]: I0223 13:16:26.348947 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.350007 master-0 kubenswrapper[17411]: I0223 13:16:26.349924 17411 
status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.350810 master-0 kubenswrapper[17411]: I0223 13:16:26.350749 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.351621 master-0 kubenswrapper[17411]: I0223 13:16:26.351560 17411 status_manager.go:851] "Failed to get status for pod" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-5bd7c86784-ld4gj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.352557 master-0 kubenswrapper[17411]: I0223 13:16:26.352479 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.353311 master-0 kubenswrapper[17411]: I0223 13:16:26.353184 17411 status_manager.go:851] "Failed to get status for pod" 
podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.354116 master-0 kubenswrapper[17411]: I0223 13:16:26.354034 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.354837 master-0 kubenswrapper[17411]: I0223 13:16:26.354769 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.355561 master-0 kubenswrapper[17411]: I0223 13:16:26.355486 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.356416 master-0 kubenswrapper[17411]: I0223 13:16:26.356352 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.357779 master-0 kubenswrapper[17411]: I0223 13:16:26.357706 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.358463 master-0 kubenswrapper[17411]: I0223 13:16:26.358398 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.874739 master-0 kubenswrapper[17411]: I0223 13:16:26.874497 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.875540 master-0 kubenswrapper[17411]: I0223 13:16:26.875452 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.876444 master-0 kubenswrapper[17411]: I0223 13:16:26.876355 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.877721 master-0 kubenswrapper[17411]: I0223 13:16:26.877611 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.878799 master-0 kubenswrapper[17411]: I0223 13:16:26.878729 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.879975 master-0 kubenswrapper[17411]: I0223 13:16:26.879836 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.881396 master-0 kubenswrapper[17411]: I0223 13:16:26.881310 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.882601 master-0 kubenswrapper[17411]: I0223 13:16:26.882529 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.883545 master-0 kubenswrapper[17411]: I0223 13:16:26.883468 17411 status_manager.go:851] "Failed to get status for pod" podUID="8db940c1-82ba-4b6e-8137-059e26ab1ced" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-zkmdz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.884395 master-0 kubenswrapper[17411]: I0223 13:16:26.884316 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 
13:16:26.885179 master-0 kubenswrapper[17411]: I0223 13:16:26.885110 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.886067 master-0 kubenswrapper[17411]: I0223 13:16:26.885988 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.886821 master-0 kubenswrapper[17411]: I0223 13:16:26.886748 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.887688 master-0 kubenswrapper[17411]: I0223 13:16:26.887603 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.888453 master-0 kubenswrapper[17411]: I0223 13:16:26.888388 17411 status_manager.go:851] "Failed to get status for pod" 
podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.889208 master-0 kubenswrapper[17411]: I0223 13:16:26.889141 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.890041 master-0 kubenswrapper[17411]: I0223 13:16:26.889941 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.890857 master-0 kubenswrapper[17411]: I0223 13:16:26.890798 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.891872 master-0 kubenswrapper[17411]: I0223 13:16:26.891813 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.892665 master-0 kubenswrapper[17411]: I0223 13:16:26.892608 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.893641 master-0 kubenswrapper[17411]: I0223 13:16:26.893575 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.894444 master-0 kubenswrapper[17411]: I0223 13:16:26.894380 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.895049 master-0 kubenswrapper[17411]: I0223 13:16:26.894978 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.895519 master-0 kubenswrapper[17411]: I0223 13:16:26.895469 17411 status_manager.go:851] "Failed to get status for pod" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-5bd7c86784-ld4gj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:26.918364 master-0 kubenswrapper[17411]: I0223 13:16:26.918239 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:16:26.918559 master-0 kubenswrapper[17411]: I0223 13:16:26.918373 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:27.305555 master-0 kubenswrapper[17411]: I0223 13:16:27.305477 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 23 13:16:27.306063 master-0 kubenswrapper[17411]: I0223 13:16:27.305567 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:27.336694 master-0 kubenswrapper[17411]: I0223 13:16:27.336136 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" event={"ID":"fc576a63-0ea6-40c8-90bc-c44b5dc95ecd","Type":"ContainerStarted","Data":"8417bb36416159959be819b65d229aa80e4bb00c35994f30a6bb0c3afaca31b4"} Feb 23 13:16:27.339833 master-0 kubenswrapper[17411]: I0223 13:16:27.339628 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.340389 master-0 kubenswrapper[17411]: I0223 13:16:27.340337 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.341428 master-0 kubenswrapper[17411]: I0223 13:16:27.341320 17411 status_manager.go:851] "Failed to get status for pod" podUID="8db940c1-82ba-4b6e-8137-059e26ab1ced" 
pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-zkmdz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.342172 master-0 kubenswrapper[17411]: I0223 13:16:27.342089 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.343129 master-0 kubenswrapper[17411]: I0223 13:16:27.343019 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.343806 master-0 kubenswrapper[17411]: I0223 13:16:27.343699 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.344660 master-0 kubenswrapper[17411]: I0223 13:16:27.344551 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.345542 master-0 kubenswrapper[17411]: I0223 13:16:27.345471 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.346320 master-0 kubenswrapper[17411]: I0223 13:16:27.346178 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.347274 master-0 kubenswrapper[17411]: I0223 13:16:27.347170 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.348331 master-0 kubenswrapper[17411]: I0223 13:16:27.348228 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 
192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.349412 master-0 kubenswrapper[17411]: I0223 13:16:27.349361 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.350590 master-0 kubenswrapper[17411]: I0223 13:16:27.350518 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.350738 master-0 kubenswrapper[17411]: I0223 13:16:27.350703 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-zkmdz_8db940c1-82ba-4b6e-8137-059e26ab1ced/machine-api-operator/0.log" Feb 23 13:16:27.351686 master-0 kubenswrapper[17411]: I0223 13:16:27.351617 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.351818 master-0 kubenswrapper[17411]: I0223 13:16:27.351764 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" 
event={"ID":"8db940c1-82ba-4b6e-8137-059e26ab1ced","Type":"ContainerStarted","Data":"a39ce12dfd0b664227673ec01f49cf83cb5e12f42c9500675bb789a359eb50ba"} Feb 23 13:16:27.352335 master-0 kubenswrapper[17411]: I0223 13:16:27.352275 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.353115 master-0 kubenswrapper[17411]: I0223 13:16:27.353060 17411 status_manager.go:851] "Failed to get status for pod" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-5bd7c86784-ld4gj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.353768 master-0 kubenswrapper[17411]: I0223 13:16:27.353701 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.354715 master-0 kubenswrapper[17411]: I0223 13:16:27.354670 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerStarted","Data":"1152d28f4c1f4afcb3b6fce62c91926a60ad42ad6accdc15babf7a5ac6cf43c3"} Feb 23 13:16:27.354715 master-0 kubenswrapper[17411]: I0223 
13:16:27.354678 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.355549 master-0 kubenswrapper[17411]: I0223 13:16:27.355488 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.356297 master-0 kubenswrapper[17411]: I0223 13:16:27.356196 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.357491 master-0 kubenswrapper[17411]: I0223 13:16:27.357416 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.358419 master-0 kubenswrapper[17411]: I0223 13:16:27.358345 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" 
pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.359215 master-0 kubenswrapper[17411]: I0223 13:16:27.359144 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.359995 master-0 kubenswrapper[17411]: I0223 13:16:27.359920 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.360887 master-0 kubenswrapper[17411]: I0223 13:16:27.360814 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.361558 master-0 kubenswrapper[17411]: I0223 13:16:27.361501 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.362441 master-0 kubenswrapper[17411]: I0223 13:16:27.362164 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.362888 master-0 kubenswrapper[17411]: I0223 13:16:27.362811 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.363620 master-0 kubenswrapper[17411]: I0223 13:16:27.363574 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:27.364269 master-0 kubenswrapper[17411]: I0223 13:16:27.364185 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 
Feb 23 13:16:27.365159 master-0 kubenswrapper[17411]: I0223 13:16:27.365075 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.365875 master-0 kubenswrapper[17411]: I0223 13:16:27.365805 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.366591 master-0 kubenswrapper[17411]: I0223 13:16:27.366504 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.367148 master-0 kubenswrapper[17411]: I0223 13:16:27.367099 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.367790 master-0 kubenswrapper[17411]: I0223 13:16:27.367724 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.368478 master-0 kubenswrapper[17411]: I0223 13:16:27.368420 17411 status_manager.go:851] "Failed to get status for pod" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-5bd7c86784-ld4gj\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.369129 master-0 kubenswrapper[17411]: I0223 13:16:27.369037 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.369854 master-0 kubenswrapper[17411]: I0223 13:16:27.369778 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.370552 master-0 kubenswrapper[17411]: I0223 13:16:27.370475 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.371357 master-0 kubenswrapper[17411]: I0223 13:16:27.371229 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.371888 master-0 kubenswrapper[17411]: I0223 13:16:27.371801 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.372456 master-0 kubenswrapper[17411]: I0223 13:16:27.372384 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.372919 master-0 kubenswrapper[17411]: I0223 13:16:27.372873 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.373565 master-0 kubenswrapper[17411]: I0223 13:16:27.373505 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.374130 master-0 kubenswrapper[17411]: I0223 13:16:27.374057 17411 status_manager.go:851] "Failed to get status for pod" podUID="8db940c1-82ba-4b6e-8137-059e26ab1ced" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-zkmdz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.374729 master-0 kubenswrapper[17411]: I0223 13:16:27.374668 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.391695 master-0 kubenswrapper[17411]: I0223 13:16:27.391605 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.411763 master-0 kubenswrapper[17411]: I0223 13:16:27.411676 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:27.554881 master-0 kubenswrapper[17411]: I0223 13:16:27.554774 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:27.554881 master-0 kubenswrapper[17411]: I0223 13:16:27.554851 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:27.555233 master-0 kubenswrapper[17411]: I0223 13:16:27.554890 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:27.555233 master-0 kubenswrapper[17411]: I0223 13:16:27.554884 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:28.937788 master-0 kubenswrapper[17411]: I0223 13:16:28.937688 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:28.937788 master-0 kubenswrapper[17411]: I0223 13:16:28.937786 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:29.918312 master-0 kubenswrapper[17411]: I0223 13:16:29.918215 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:29.918535 master-0 kubenswrapper[17411]: I0223 13:16:29.918342 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:29.918535 master-0 kubenswrapper[17411]: I0223 13:16:29.918429 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:29.918535 master-0 kubenswrapper[17411]: I0223 13:16:29.918453 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:31.586828 master-0 kubenswrapper[17411]: E0223 13:16:31.586746 17411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s"
Feb 23 13:16:32.604276 master-0 kubenswrapper[17411]: I0223 13:16:32.604149 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms"
Feb 23 13:16:32.606100 master-0 kubenswrapper[17411]: I0223 13:16:32.605988 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.607032 master-0 kubenswrapper[17411]: I0223 13:16:32.606976 17411 status_manager.go:851] "Failed to get status for pod" podUID="8db940c1-82ba-4b6e-8137-059e26ab1ced" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-zkmdz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.607865 master-0 kubenswrapper[17411]: I0223 13:16:32.607796 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.608466 master-0 kubenswrapper[17411]: I0223 13:16:32.608408 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.609126 master-0 kubenswrapper[17411]: I0223 13:16:32.609079 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.609837 master-0 kubenswrapper[17411]: I0223 13:16:32.609735 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.610409 master-0 kubenswrapper[17411]: I0223 13:16:32.610358 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.611034 master-0 kubenswrapper[17411]: I0223 13:16:32.610977 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.611610 master-0 kubenswrapper[17411]: I0223 13:16:32.611555 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.612171 master-0 kubenswrapper[17411]: I0223 13:16:32.612117 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.612793 master-0 kubenswrapper[17411]: I0223 13:16:32.612752 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.613490 master-0 kubenswrapper[17411]: I0223 13:16:32.613436 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.614104 master-0 kubenswrapper[17411]: I0223 13:16:32.614061 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.614709 master-0 kubenswrapper[17411]: I0223 13:16:32.614658 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.615538 master-0 kubenswrapper[17411]: I0223 13:16:32.615460 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.616208 master-0 kubenswrapper[17411]: I0223 13:16:32.616162 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.616861 master-0 kubenswrapper[17411]: I0223 13:16:32.616809 17411 status_manager.go:851] "Failed to get status for pod" podUID="da5d5997-e45f-4858-a9a9-e880bc222caf" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-8tzms\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.618333 master-0 kubenswrapper[17411]: I0223 13:16:32.617650 17411 status_manager.go:851] "Failed to get status for pod" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-5bd7c86784-ld4gj\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.618456 master-0 kubenswrapper[17411]: I0223 13:16:32.618416 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.619085 master-0 kubenswrapper[17411]: I0223 13:16:32.619034 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.619856 master-0 kubenswrapper[17411]: I0223 13:16:32.619785 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.620871 master-0 kubenswrapper[17411]: I0223 13:16:32.620792 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.621814 master-0 kubenswrapper[17411]: I0223 13:16:32.621718 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.623049 master-0 kubenswrapper[17411]: I0223 13:16:32.622963 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.623885 master-0 kubenswrapper[17411]: I0223 13:16:32.623822 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.868621 master-0 kubenswrapper[17411]: I0223 13:16:32.868438 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:32.870305 master-0 kubenswrapper[17411]: I0223 13:16:32.870182 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.871552 master-0 kubenswrapper[17411]: I0223 13:16:32.871380 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.872599 master-0 kubenswrapper[17411]: I0223 13:16:32.872527 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.873812 master-0 kubenswrapper[17411]: I0223 13:16:32.873746 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.874839 master-0 kubenswrapper[17411]: I0223 13:16:32.874778 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.876150 master-0 kubenswrapper[17411]: I0223 13:16:32.876068 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.877121 master-0 kubenswrapper[17411]: I0223 13:16:32.877045 17411 status_manager.go:851] "Failed to get status for pod" podUID="da5d5997-e45f-4858-a9a9-e880bc222caf" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-8tzms\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.878176 master-0 kubenswrapper[17411]: I0223 13:16:32.878087 17411 status_manager.go:851] "Failed to get status for pod" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-5bd7c86784-ld4gj\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.879143 master-0 kubenswrapper[17411]: I0223 13:16:32.879062 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.880016 master-0 kubenswrapper[17411]: I0223 13:16:32.879952 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.881180 master-0 kubenswrapper[17411]: I0223 13:16:32.881107 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.882179 master-0 kubenswrapper[17411]: I0223 13:16:32.882073 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.882885 master-0 kubenswrapper[17411]: I0223 13:16:32.882839 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.883688 master-0 kubenswrapper[17411]: I0223 13:16:32.883642 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.884599 master-0 kubenswrapper[17411]: I0223 13:16:32.884531 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.885418 master-0 kubenswrapper[17411]: I0223 13:16:32.885366 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.886449 master-0 kubenswrapper[17411]: I0223 13:16:32.886389 17411 status_manager.go:851] "Failed to get status for pod" podUID="8db940c1-82ba-4b6e-8137-059e26ab1ced" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-zkmdz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.887361 master-0 kubenswrapper[17411]: I0223 13:16:32.887317 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.888264 master-0 kubenswrapper[17411]: I0223 13:16:32.888161 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.889015 master-0 kubenswrapper[17411]: I0223 13:16:32.888951 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.889915 master-0 kubenswrapper[17411]: I0223 13:16:32.889838 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.890854 master-0 kubenswrapper[17411]: I0223 13:16:32.890792 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.892017 master-0 kubenswrapper[17411]: I0223 13:16:32.891957 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.893350 master-0 kubenswrapper[17411]: I0223 13:16:32.893292 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.894524 master-0 kubenswrapper[17411]: I0223 13:16:32.894478 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:32.896299 master-0 kubenswrapper[17411]: I0223 13:16:32.896209 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:16:32.896299 master-0 kubenswrapper[17411]: I0223 13:16:32.896292 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:16:32.897520 master-0 kubenswrapper[17411]: E0223 13:16:32.897437 17411 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:32.898345 master-0 kubenswrapper[17411]: I0223 13:16:32.898306 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:32.918099 master-0 kubenswrapper[17411]: I0223 13:16:32.917920 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:32.918099 master-0 kubenswrapper[17411]: I0223 13:16:32.918067 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:32.918391 master-0 kubenswrapper[17411]: I0223 13:16:32.917943 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:16:32.918391 master-0 kubenswrapper[17411]: I0223 13:16:32.918163 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:16:32.918391 master-0 kubenswrapper[17411]: I0223 13:16:32.918218 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:32.919591 master-0 kubenswrapper[17411]: I0223 13:16:32.919526 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"b9c687a3f5c3743ab7129ad40d992c8bb14afad9eb63849349528e53a314cb38"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 23 13:16:32.919725 master-0 kubenswrapper[17411]: I0223 13:16:32.919615 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" containerID="cri-o://b9c687a3f5c3743ab7129ad40d992c8bb14afad9eb63849349528e53a314cb38" gracePeriod=30 Feb 23 13:16:32.930045 master-0 kubenswrapper[17411]: I0223 13:16:32.929636 17411 patch_prober.go:28] interesting 
pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": read tcp 10.128.0.2:40834->10.128.0.12:8443: read: connection reset by peer" start-of-body= Feb 23 13:16:32.930045 master-0 kubenswrapper[17411]: I0223 13:16:32.929798 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": read tcp 10.128.0.2:40834->10.128.0.12:8443: read: connection reset by peer" Feb 23 13:16:33.411743 master-0 kubenswrapper[17411]: I0223 13:16:33.411570 17411 generic.go:334] "Generic (PLEG): container finished" podID="959c75833224b4ba3fa488b77d8f5032" containerID="1ccd0d66efb6fc1017d9ff7c176c9ee040c1b848e55b7965ec1f33d638df12be" exitCode=0 Feb 23 13:16:33.411743 master-0 kubenswrapper[17411]: I0223 13:16:33.411673 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"959c75833224b4ba3fa488b77d8f5032","Type":"ContainerDied","Data":"1ccd0d66efb6fc1017d9ff7c176c9ee040c1b848e55b7965ec1f33d638df12be"} Feb 23 13:16:33.411743 master-0 kubenswrapper[17411]: I0223 13:16:33.411711 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"959c75833224b4ba3fa488b77d8f5032","Type":"ContainerStarted","Data":"1ce50d0a9b851de04c7b382e9908daa2b215aeb092ef24bcbda5a1d60b1862a1"} Feb 23 13:16:33.412037 master-0 kubenswrapper[17411]: I0223 13:16:33.412020 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429" Feb 23 13:16:33.412082 master-0 kubenswrapper[17411]: I0223 13:16:33.412038 17411 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429" Feb 23 13:16:33.413013 master-0 kubenswrapper[17411]: E0223 13:16:33.412972 17411 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:16:33.413262 master-0 kubenswrapper[17411]: I0223 13:16:33.413134 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.413998 master-0 kubenswrapper[17411]: I0223 13:16:33.413971 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.414948 master-0 kubenswrapper[17411]: I0223 13:16:33.414551 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.415321 master-0 kubenswrapper[17411]: I0223 13:16:33.415211 17411 status_manager.go:851] "Failed to get status for pod" 
podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.416054 master-0 kubenswrapper[17411]: I0223 13:16:33.416005 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.416668 master-0 kubenswrapper[17411]: I0223 13:16:33.416598 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.416949 master-0 kubenswrapper[17411]: I0223 13:16:33.416925 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/3.log" Feb 23 13:16:33.417623 master-0 kubenswrapper[17411]: I0223 13:16:33.417289 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": 
dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.419090 master-0 kubenswrapper[17411]: I0223 13:16:33.417957 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.419090 master-0 kubenswrapper[17411]: I0223 13:16:33.417990 17411 generic.go:334] "Generic (PLEG): container finished" podID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerID="b9c687a3f5c3743ab7129ad40d992c8bb14afad9eb63849349528e53a314cb38" exitCode=255 Feb 23 13:16:33.419090 master-0 kubenswrapper[17411]: I0223 13:16:33.418039 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerDied","Data":"b9c687a3f5c3743ab7129ad40d992c8bb14afad9eb63849349528e53a314cb38"} Feb 23 13:16:33.419090 master-0 kubenswrapper[17411]: I0223 13:16:33.418074 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerStarted","Data":"67d44d75e83e1738383d940ce092f767380c2ef842af8140e42e9f6428546c93"} Feb 23 13:16:33.419090 master-0 kubenswrapper[17411]: I0223 13:16:33.418095 17411 scope.go:117] "RemoveContainer" containerID="1d00be7013db5f4871f8f9fcca38d13b794aeb731da6878ede81daa395d911d9" Feb 23 13:16:33.419090 master-0 kubenswrapper[17411]: I0223 13:16:33.418480 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:16:33.419090 master-0 kubenswrapper[17411]: I0223 
13:16:33.418726 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.419816 master-0 kubenswrapper[17411]: I0223 13:16:33.419739 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.420487 master-0 kubenswrapper[17411]: I0223 13:16:33.420442 17411 status_manager.go:851] "Failed to get status for pod" podUID="da5d5997-e45f-4858-a9a9-e880bc222caf" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-8tzms\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.421225 master-0 kubenswrapper[17411]: I0223 13:16:33.421188 17411 status_manager.go:851] "Failed to get status for pod" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-5bd7c86784-ld4gj\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.421854 master-0 kubenswrapper[17411]: I0223 13:16:33.421819 17411 status_manager.go:851] "Failed to get status 
for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.422387 master-0 kubenswrapper[17411]: I0223 13:16:33.422357 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.422938 master-0 kubenswrapper[17411]: I0223 13:16:33.422864 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.423533 master-0 kubenswrapper[17411]: I0223 13:16:33.423502 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.424064 master-0 kubenswrapper[17411]: I0223 13:16:33.424034 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.424661 master-0 kubenswrapper[17411]: I0223 13:16:33.424629 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.425267 master-0 kubenswrapper[17411]: I0223 13:16:33.425207 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.425945 master-0 kubenswrapper[17411]: I0223 13:16:33.425909 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.426608 master-0 kubenswrapper[17411]: I0223 13:16:33.426576 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.427171 master-0 kubenswrapper[17411]: I0223 13:16:33.427143 17411 status_manager.go:851] "Failed to get status for pod" podUID="8db940c1-82ba-4b6e-8137-059e26ab1ced" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-zkmdz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.427730 master-0 kubenswrapper[17411]: I0223 13:16:33.427691 17411 status_manager.go:851] "Failed to get status for pod" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.428293 master-0 kubenswrapper[17411]: I0223 13:16:33.428264 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.428938 master-0 kubenswrapper[17411]: I0223 13:16:33.428901 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" 
Feb 23 13:16:33.429739 master-0 kubenswrapper[17411]: I0223 13:16:33.429706 17411 status_manager.go:851] "Failed to get status for pod" podUID="38b7ce474df02ea287eb02ea513a627a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.430329 master-0 kubenswrapper[17411]: I0223 13:16:33.430229 17411 status_manager.go:851] "Failed to get status for pod" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-8586dccc9b-6wk86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.431103 master-0 kubenswrapper[17411]: I0223 13:16:33.431060 17411 status_manager.go:851] "Failed to get status for pod" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-6847bb4785-hgkrm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.431663 master-0 kubenswrapper[17411]: I0223 13:16:33.431632 17411 status_manager.go:851] "Failed to get status for pod" podUID="8db940c1-82ba-4b6e-8137-059e26ab1ced" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-zkmdz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-zkmdz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.432377 master-0 kubenswrapper[17411]: I0223 13:16:33.432304 17411 status_manager.go:851] "Failed to get status for pod" 
podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.432919 master-0 kubenswrapper[17411]: I0223 13:16:33.432883 17411 status_manager.go:851] "Failed to get status for pod" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-8mxs2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.433414 master-0 kubenswrapper[17411]: I0223 13:16:33.433383 17411 status_manager.go:851] "Failed to get status for pod" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-c48c8bf7c-rvccp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.434007 master-0 kubenswrapper[17411]: I0223 13:16:33.433977 17411 status_manager.go:851] "Failed to get status for pod" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-6fb4df594f-sx924\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.434605 master-0 kubenswrapper[17411]: I0223 13:16:33.434576 17411 status_manager.go:851] "Failed to get status for pod" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-576b4d78bd-nds57\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.435113 master-0 kubenswrapper[17411]: I0223 13:16:33.435073 17411 status_manager.go:851] "Failed to get status for pod" podUID="afeec80f2ec1ff5cb32c2367912befef" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.435703 master-0 kubenswrapper[17411]: I0223 13:16:33.435673 17411 status_manager.go:851] "Failed to get status for pod" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-545bf96f4d-drk2j\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.436181 master-0 kubenswrapper[17411]: I0223 13:16:33.436152 17411 status_manager.go:851] "Failed to get status for pod" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods/kube-storage-version-migrator-operator-fc889cfd5-ccvpn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.437139 master-0 kubenswrapper[17411]: I0223 13:16:33.437084 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-6f47d587d6-p5488\": dial tcp 
192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.437937 master-0 kubenswrapper[17411]: I0223 13:16:33.437890 17411 status_manager.go:851] "Failed to get status for pod" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-78784b9d57-r4sf8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.438653 master-0 kubenswrapper[17411]: I0223 13:16:33.438613 17411 status_manager.go:851] "Failed to get status for pod" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7bcfbc574b-jpf5n\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.439709 master-0 kubenswrapper[17411]: I0223 13:16:33.439610 17411 status_manager.go:851] "Failed to get status for pod" podUID="da5d5997-e45f-4858-a9a9-e880bc222caf" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tzms" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c75f78c8b-8tzms\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 13:16:33.440724 master-0 kubenswrapper[17411]: I0223 13:16:33.440643 17411 status_manager.go:851] "Failed to get status for pod" podUID="c2c8336c-0733-4e20-85ec-062e07b6fdc0" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-p9r9b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-54cb48566c-p9r9b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 23 
13:16:33.441514 master-0 kubenswrapper[17411]: I0223 13:16:33.441449 17411 status_manager.go:851] "Failed to get status for pod" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-5bd7c86784-ld4gj\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:33.442382 master-0 kubenswrapper[17411]: I0223 13:16:33.442329 17411 status_manager.go:851] "Failed to get status for pod" podUID="c33f208a-e158-47e2-83d5-ac792bf3a1d5" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-82h6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-7f8c75f984-82h6s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:33.443166 master-0 kubenswrapper[17411]: I0223 13:16:33.443109 17411 status_manager.go:851] "Failed to get status for pod" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5df5ffc47c-zwmzz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:33.444013 master-0 kubenswrapper[17411]: I0223 13:16:33.443945 17411 status_manager.go:851] "Failed to get status for pod" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-5d87bf58c-dgldn\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:33.444775 master-0 kubenswrapper[17411]: I0223 13:16:33.444722 17411 status_manager.go:851] "Failed to get status for pod" podUID="85958edf-e3da-4704-8f09-cf049101f2e6" pod="openshift-network-operator/network-operator-7d7db75979-rmsq8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-7d7db75979-rmsq8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:33.445507 master-0 kubenswrapper[17411]: I0223 13:16:33.445455 17411 status_manager.go:851] "Failed to get status for pod" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-584cc7bcb5-t9gx8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:33.446369 master-0 kubenswrapper[17411]: I0223 13:16:33.446296 17411 status_manager.go:851] "Failed to get status for pod" podUID="fc576a63-0ea6-40c8-90bc-c44b5dc95ecd" pod="openshift-cluster-version/cluster-version-operator-57476485-j4p78" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-57476485-j4p78\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:33.447182 master-0 kubenswrapper[17411]: I0223 13:16:33.447122 17411 status_manager.go:851] "Failed to get status for pod" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-ck859\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 23 13:16:33.519473 master-0 kubenswrapper[17411]: I0223 13:16:33.519381 17411 patch_prober.go:28] interesting pod/route-controller-manager-78784b9d57-r4sf8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.89:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:33.519669 master-0 kubenswrapper[17411]: I0223 13:16:33.519480 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.89:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:34.433983 master-0 kubenswrapper[17411]: I0223 13:16:34.433910 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/3.log"
Feb 23 13:16:34.451433 master-0 kubenswrapper[17411]: I0223 13:16:34.451368 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"959c75833224b4ba3fa488b77d8f5032","Type":"ContainerStarted","Data":"c87147a4890661b2f7c15d9641dc954d9d696c88a05d2b50a5bc7bbc4de4fd51"}
Feb 23 13:16:34.451433 master-0 kubenswrapper[17411]: I0223 13:16:34.451422 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"959c75833224b4ba3fa488b77d8f5032","Type":"ContainerStarted","Data":"0bfe2991265cd588abcaf8d0b2af43bf522379cacbac29b26444a0a05d8a31b1"}
Feb 23 13:16:34.451433 master-0 kubenswrapper[17411]: I0223 13:16:34.451434 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"959c75833224b4ba3fa488b77d8f5032","Type":"ContainerStarted","Data":"7b6cc5be5905ae7f4816b017841fa7b3fcf14727394d0d519f454d37363136d4"}
Feb 23 13:16:35.482139 master-0 kubenswrapper[17411]: I0223 13:16:35.482051 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"959c75833224b4ba3fa488b77d8f5032","Type":"ContainerStarted","Data":"0362eff4d622e8da84bef8c367ec2f348346a9c774e282f1c62b337838da0ed4"}
Feb 23 13:16:35.482139 master-0 kubenswrapper[17411]: I0223 13:16:35.482131 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"959c75833224b4ba3fa488b77d8f5032","Type":"ContainerStarted","Data":"aa9417a69b3d8534fa7fbe4b07141243e626f791a17923e9d8b54134a737639f"}
Feb 23 13:16:35.482699 master-0 kubenswrapper[17411]: I0223 13:16:35.482502 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:16:35.482699 master-0 kubenswrapper[17411]: I0223 13:16:35.482522 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:16:35.482855 master-0 kubenswrapper[17411]: I0223 13:16:35.482829 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:37.501336 master-0 kubenswrapper[17411]: I0223 13:16:37.501239 17411 generic.go:334] "Generic (PLEG): container finished" podID="d32952be-0fe3-431f-aa8f-6a35159fa845" containerID="e36049120c7b7a1b6f305f409b9f243014dca1a45ca5d0d44a737b2995cef2d6" exitCode=0
Feb 23 13:16:37.502040 master-0 kubenswrapper[17411]: I0223 13:16:37.501319 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" event={"ID":"d32952be-0fe3-431f-aa8f-6a35159fa845","Type":"ContainerDied","Data":"e36049120c7b7a1b6f305f409b9f243014dca1a45ca5d0d44a737b2995cef2d6"}
Feb 23 13:16:37.502226 master-0 kubenswrapper[17411]: I0223 13:16:37.502187 17411 scope.go:117] "RemoveContainer" containerID="e36049120c7b7a1b6f305f409b9f243014dca1a45ca5d0d44a737b2995cef2d6"
Feb 23 13:16:37.554749 master-0 kubenswrapper[17411]: I0223 13:16:37.554663 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:37.554944 master-0 kubenswrapper[17411]: I0223 13:16:37.554764 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:37.554944 master-0 kubenswrapper[17411]: I0223 13:16:37.554888 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:37.555017 master-0 kubenswrapper[17411]: I0223 13:16:37.554785 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:37.555056 master-0 kubenswrapper[17411]: I0223 13:16:37.555039 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:16:37.556230 master-0 kubenswrapper[17411]: I0223 13:16:37.556176 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"7cad404ca76efda43343352d885646b7d9999a244c40ac96a495b9212da0c05b"} pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" containerMessage="Container console-operator failed liveness probe, will be restarted"
Feb 23 13:16:37.556324 master-0 kubenswrapper[17411]: I0223 13:16:37.556289 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" containerID="cri-o://7cad404ca76efda43343352d885646b7d9999a244c40ac96a495b9212da0c05b" gracePeriod=30
Feb 23 13:16:37.572831 master-0 kubenswrapper[17411]: I0223 13:16:37.572744 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": read tcp 10.128.0.2:36152->10.128.0.77:8443: read: connection reset by peer" start-of-body=
Feb 23 13:16:37.572951 master-0 kubenswrapper[17411]: I0223 13:16:37.572878 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": read tcp 10.128.0.2:36152->10.128.0.77:8443: read: connection reset by peer"
Feb 23 13:16:37.899503 master-0 kubenswrapper[17411]: I0223 13:16:37.899426 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:37.899503 master-0 kubenswrapper[17411]: I0223 13:16:37.899506 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:37.904913 master-0 kubenswrapper[17411]: I0223 13:16:37.904871 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:38.512711 master-0 kubenswrapper[17411]: I0223 13:16:38.512653 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/3.log"
Feb 23 13:16:38.513515 master-0 kubenswrapper[17411]: I0223 13:16:38.513479 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/2.log"
Feb 23 13:16:38.513618 master-0 kubenswrapper[17411]: I0223 13:16:38.513577 17411 generic.go:334] "Generic (PLEG): container finished" podID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerID="7cad404ca76efda43343352d885646b7d9999a244c40ac96a495b9212da0c05b" exitCode=255
Feb 23 13:16:38.513734 master-0 kubenswrapper[17411]: I0223 13:16:38.513677 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerDied","Data":"7cad404ca76efda43343352d885646b7d9999a244c40ac96a495b9212da0c05b"}
Feb 23 13:16:38.513809 master-0 kubenswrapper[17411]: I0223 13:16:38.513760 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" event={"ID":"679fabb5-a261-402e-b5be-8fe7f0da0ec8","Type":"ContainerStarted","Data":"c36d3ec1920486f4a5a95657eafd70e3db233a5253429462ceccb640935250ba"}
Feb 23 13:16:38.513809 master-0 kubenswrapper[17411]: I0223 13:16:38.513787 17411 scope.go:117] "RemoveContainer" containerID="a9bd4a7b9fb99886adf93bfc960885defd2d234f1a5421f4f3bc1b667090a9fc"
Feb 23 13:16:38.514144 master-0 kubenswrapper[17411]: I0223 13:16:38.514109 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz"
Feb 23 13:16:38.517123 master-0 kubenswrapper[17411]: I0223 13:16:38.517080 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-gss4v" event={"ID":"d32952be-0fe3-431f-aa8f-6a35159fa845","Type":"ContainerStarted","Data":"67c1688cbf3ff0a56bbe25027dbfe6ca165e2fe785a2525cef41398c9e3132a8"}
Feb 23 13:16:38.869547 master-0 kubenswrapper[17411]: I0223 13:16:38.869399 17411 scope.go:117] "RemoveContainer" containerID="0813bfb6e953cd7dccc120a35be8130ef691d39b2802203da3ff37c1fe23401a"
Feb 23 13:16:38.919287 master-0 kubenswrapper[17411]: I0223 13:16:38.919142 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:38.919287 master-0 kubenswrapper[17411]: I0223 13:16:38.919180 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:38.919287 master-0 kubenswrapper[17411]: I0223 13:16:38.919279 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:38.919720 master-0 kubenswrapper[17411]: I0223 13:16:38.919228 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:38.937455 master-0 kubenswrapper[17411]: I0223 13:16:38.937337 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:38.937650 master-0 kubenswrapper[17411]: I0223 13:16:38.937439 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:39.515065 master-0 kubenswrapper[17411]: I0223 13:16:39.514936 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:39.515871 master-0 kubenswrapper[17411]: I0223 13:16:39.515108 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:39.528848 master-0 kubenswrapper[17411]: I0223 13:16:39.528796 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-zwmzz_679fabb5-a261-402e-b5be-8fe7f0da0ec8/console-operator/3.log"
Feb 23 13:16:39.531506 master-0 kubenswrapper[17411]: I0223 13:16:39.531463 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/5.log"
Feb 23 13:16:39.532316 master-0 kubenswrapper[17411]: I0223 13:16:39.532265 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/4.log"
Feb 23 13:16:39.532889 master-0 kubenswrapper[17411]: I0223 13:16:39.532844 17411 generic.go:334] "Generic (PLEG): container finished" podID="16898873-740b-4b85-99cf-d25a28d4ab00" containerID="72600f7ac1b92f01197c56d298715777572c9e118234eed615d6c2923db72d7a" exitCode=1
Feb 23 13:16:39.532951 master-0 kubenswrapper[17411]: I0223 13:16:39.532892 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerDied","Data":"72600f7ac1b92f01197c56d298715777572c9e118234eed615d6c2923db72d7a"}
Feb 23 13:16:39.532951 master-0 kubenswrapper[17411]: I0223 13:16:39.532934 17411 scope.go:117] "RemoveContainer" containerID="0813bfb6e953cd7dccc120a35be8130ef691d39b2802203da3ff37c1fe23401a"
Feb 23 13:16:39.533759 master-0 kubenswrapper[17411]: I0223 13:16:39.533687 17411 scope.go:117] "RemoveContainer" containerID="72600f7ac1b92f01197c56d298715777572c9e118234eed615d6c2923db72d7a"
Feb 23 13:16:39.534659 master-0 kubenswrapper[17411]: E0223 13:16:39.534138 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-8mxs2_openshift-machine-api(16898873-740b-4b85-99cf-d25a28d4ab00)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podUID="16898873-740b-4b85-99cf-d25a28d4ab00"
Feb 23 13:16:40.530002 master-0 kubenswrapper[17411]: I0223 13:16:40.529890 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:40.530593 master-0 kubenswrapper[17411]: I0223 13:16:40.530022 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:40.537014 master-0 kubenswrapper[17411]: I0223 13:16:40.536953 17411 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:40.540830 master-0 kubenswrapper[17411]: I0223 13:16:40.540784 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/5.log"
Feb 23 13:16:40.542811 master-0 kubenswrapper[17411]: I0223 13:16:40.542778 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-drk2j_03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/etcd-operator/2.log"
Feb 23 13:16:40.543286 master-0 kubenswrapper[17411]: I0223 13:16:40.543256 17411 generic.go:334] "Generic (PLEG): container finished" podID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4" containerID="c46456f1ed6992fcaa7efa9da58c257125d42b7b803815f762f0ce0032f75935" exitCode=255
Feb 23 13:16:40.543376 master-0 kubenswrapper[17411]: I0223 13:16:40.543273 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" event={"ID":"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4","Type":"ContainerDied","Data":"c46456f1ed6992fcaa7efa9da58c257125d42b7b803815f762f0ce0032f75935"}
Feb 23 13:16:40.543466 master-0 kubenswrapper[17411]: I0223 13:16:40.543453 17411 scope.go:117] "RemoveContainer" containerID="7ae02e0df64340d5796187bee35b0a226bdb253a9ea0b0f2d5eec150f3a915b5"
Feb 23 13:16:40.544082 master-0 kubenswrapper[17411]: I0223 13:16:40.544051 17411 scope.go:117] "RemoveContainer" containerID="c46456f1ed6992fcaa7efa9da58c257125d42b7b803815f762f0ce0032f75935"
Feb 23 13:16:40.544380 master-0 kubenswrapper[17411]: E0223 13:16:40.544319 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd-operator pod=etcd-operator-545bf96f4d-drk2j_openshift-etcd-operator(03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4)\"" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" podUID="03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4"
Feb 23 13:16:41.554886 master-0 kubenswrapper[17411]: I0223 13:16:41.554600 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-drk2j_03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/etcd-operator/2.log"
Feb 23 13:16:41.556350 master-0 kubenswrapper[17411]: I0223 13:16:41.555087 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:16:41.556350 master-0 kubenswrapper[17411]: I0223 13:16:41.555109 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:16:41.561703 master-0 kubenswrapper[17411]: I0223 13:16:41.561508 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 23 13:16:41.602900 master-0 kubenswrapper[17411]: I0223 13:16:41.602813 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="959c75833224b4ba3fa488b77d8f5032" podUID="e02fac60-9feb-469c-9f39-0a6507464db2"
Feb 23 13:16:41.917678 master-0 kubenswrapper[17411]: I0223 13:16:41.917496 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:41.918078 master-0 kubenswrapper[17411]: I0223 13:16:41.918018 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:41.918367 master-0 kubenswrapper[17411]: I0223 13:16:41.917617 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:41.918709 master-0 kubenswrapper[17411]: I0223 13:16:41.918640 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:42.563418 master-0 kubenswrapper[17411]: I0223 13:16:42.563355 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:16:42.563418 master-0 kubenswrapper[17411]: I0223 13:16:42.563401 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="1b7dc343-8f8e-4d77-9c6b-2583f0b86429"
Feb 23 13:16:43.515688 master-0 kubenswrapper[17411]: I0223 13:16:43.515591 17411 patch_prober.go:28] interesting pod/route-controller-manager-78784b9d57-r4sf8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.89:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:43.515954 master-0 kubenswrapper[17411]: I0223 13:16:43.515706 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.89:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:44.918287 master-0 kubenswrapper[17411]: I0223 13:16:44.918087 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:44.919206 master-0 kubenswrapper[17411]: I0223 13:16:44.918272 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:44.919206 master-0 kubenswrapper[17411]: I0223 13:16:44.918362 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:44.919206 master-0 kubenswrapper[17411]: I0223 13:16:44.918269 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:44.919206 master-0 kubenswrapper[17411]: I0223 13:16:44.918486 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488"
Feb 23 13:16:44.919754 master-0 kubenswrapper[17411]: I0223 13:16:44.919300 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"67d44d75e83e1738383d940ce092f767380c2ef842af8140e42e9f6428546c93"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 23 13:16:44.919754 master-0 kubenswrapper[17411]: I0223 13:16:44.919343 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" containerID="cri-o://67d44d75e83e1738383d940ce092f767380c2ef842af8140e42e9f6428546c93" gracePeriod=30
Feb 23 13:16:44.929093 master-0 kubenswrapper[17411]: I0223 13:16:44.929028 17411 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-p5488 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": read tcp 10.128.0.2:44658->10.128.0.12:8443: read: connection reset by peer" start-of-body=
Feb 23 13:16:44.929093 master-0 kubenswrapper[17411]: I0223 13:16:44.929089 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": read tcp 10.128.0.2:44658->10.128.0.12:8443: read: connection reset by peer"
Feb 23 13:16:45.044160 master-0 kubenswrapper[17411]: E0223 13:16:45.044091 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-p5488_openshift-config-operator(c2b80534-3c9d-4ddb-9215-d50d63294c7c)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c"
Feb 23 13:16:45.589509 master-0 kubenswrapper[17411]: I0223 13:16:45.589443 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/4.log"
Feb 23 13:16:45.590432 master-0 kubenswrapper[17411]: I0223 13:16:45.590392 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/3.log"
Feb 23 13:16:45.591217 master-0 kubenswrapper[17411]: I0223 13:16:45.591112 17411 generic.go:334] "Generic (PLEG): container finished" podID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" containerID="67d44d75e83e1738383d940ce092f767380c2ef842af8140e42e9f6428546c93" exitCode=255
Feb 23 13:16:45.591217 master-0 kubenswrapper[17411]: I0223 13:16:45.591170 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerDied","Data":"67d44d75e83e1738383d940ce092f767380c2ef842af8140e42e9f6428546c93"}
Feb 23 13:16:45.591217 master-0 kubenswrapper[17411]: I0223 13:16:45.591213 17411 scope.go:117] "RemoveContainer" containerID="b9c687a3f5c3743ab7129ad40d992c8bb14afad9eb63849349528e53a314cb38"
Feb 23 13:16:45.591910 master-0 kubenswrapper[17411]: I0223 13:16:45.591873 17411 scope.go:117] "RemoveContainer" containerID="67d44d75e83e1738383d940ce092f767380c2ef842af8140e42e9f6428546c93"
Feb 23 13:16:45.592176 master-0 kubenswrapper[17411]: E0223 13:16:45.592128 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-p5488_openshift-config-operator(c2b80534-3c9d-4ddb-9215-d50d63294c7c)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c"
Feb 23 13:16:45.949177 master-0 kubenswrapper[17411]: I0223 13:16:45.949096 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:16:45.959339 master-0 kubenswrapper[17411]: I0223 13:16:45.958848 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:16:46.603991 master-0 kubenswrapper[17411]: I0223 13:16:46.603952 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/4.log"
Feb 23 13:16:46.961210 master-0 kubenswrapper[17411]: I0223 13:16:46.961115 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="959c75833224b4ba3fa488b77d8f5032" podUID="e02fac60-9feb-469c-9f39-0a6507464db2"
Feb 23 13:16:47.554973 master-0 kubenswrapper[17411]: I0223 13:16:47.554862 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:47.554973 master-0 kubenswrapper[17411]: I0223 13:16:47.554919 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 13:16:47.554973 master-0 kubenswrapper[17411]: I0223 13:16:47.554950 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:47.555395 master-0 kubenswrapper[17411]: I0223 13:16:47.554992 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:16:50.151777 master-0 kubenswrapper[17411]: I0223 13:16:50.151646 17411 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 13:16:50.254633 master-0 kubenswrapper[17411]: I0223 13:16:50.254553 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 23 13:16:50.592817 master-0 kubenswrapper[17411]: I0223 13:16:50.592728 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-qhbh8"
Feb 23 13:16:50.778533 master-0 kubenswrapper[17411]: I0223 13:16:50.778450 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 23 13:16:50.876892 master-0 kubenswrapper[17411]: I0223 13:16:50.876744 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 23 13:16:51.112216 master-0 kubenswrapper[17411]: I0223 13:16:51.112142 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 23 13:16:51.183497 master-0 kubenswrapper[17411]: I0223 13:16:51.183374 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 23 13:16:51.184427 master-0 kubenswrapper[17411]: I0223 13:16:51.183585 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:16:51.462176 master-0 kubenswrapper[17411]: I0223 13:16:51.462114 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 23 13:16:51.616931 master-0 kubenswrapper[17411]: I0223 13:16:51.616808 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j"
Feb 23 13:16:51.617764 master-0 kubenswrapper[17411]: I0223 13:16:51.617698 17411 scope.go:117] "RemoveContainer" containerID="c46456f1ed6992fcaa7efa9da58c257125d42b7b803815f762f0ce0032f75935"
Feb 23 13:16:51.696622 master-0 kubenswrapper[17411]: I0223 13:16:51.696553 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 23 
13:16:51.807635 master-0 kubenswrapper[17411]: I0223 13:16:51.807587 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 13:16:51.973622 master-0 kubenswrapper[17411]: I0223 13:16:51.973469 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 23 13:16:51.986665 master-0 kubenswrapper[17411]: I0223 13:16:51.986630 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-n8vwz" Feb 23 13:16:52.005403 master-0 kubenswrapper[17411]: I0223 13:16:52.005349 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 23 13:16:52.091564 master-0 kubenswrapper[17411]: I0223 13:16:52.090796 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 13:16:52.123529 master-0 kubenswrapper[17411]: I0223 13:16:52.123477 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 13:16:52.140934 master-0 kubenswrapper[17411]: I0223 13:16:52.140874 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 23 13:16:52.273013 master-0 kubenswrapper[17411]: I0223 13:16:52.272947 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 23 13:16:52.377695 master-0 kubenswrapper[17411]: I0223 13:16:52.377528 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 23 13:16:52.437060 master-0 kubenswrapper[17411]: I0223 13:16:52.436975 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 
13:16:52.458882 master-0 kubenswrapper[17411]: I0223 13:16:52.458796 17411 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 13:16:52.511080 master-0 kubenswrapper[17411]: I0223 13:16:52.510964 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 13:16:52.536050 master-0 kubenswrapper[17411]: I0223 13:16:52.535985 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 13:16:52.625465 master-0 kubenswrapper[17411]: I0223 13:16:52.625382 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 13:16:52.666068 master-0 kubenswrapper[17411]: I0223 13:16:52.665938 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 13:16:52.666701 master-0 kubenswrapper[17411]: I0223 13:16:52.666641 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 23 13:16:52.676607 master-0 kubenswrapper[17411]: I0223 13:16:52.676549 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-drk2j_03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4/etcd-operator/2.log" Feb 23 13:16:52.676744 master-0 kubenswrapper[17411]: I0223 13:16:52.676677 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-drk2j" event={"ID":"03da8bbe-c1b1-4f3f-acec-d1dd0c8afae4","Type":"ContainerStarted","Data":"54b087e9ee108804c83f34dae932e28c6f3e8442e9d58b1055639fcffbee774e"} Feb 23 13:16:52.678215 master-0 kubenswrapper[17411]: I0223 13:16:52.678190 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859_f88d6ed3-c0a6-4eef-b80c-417994cf69b0/cluster-storage-operator/1.log" Feb 23 13:16:52.679404 master-0 kubenswrapper[17411]: I0223 13:16:52.678751 17411 generic.go:334] "Generic (PLEG): container finished" podID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" containerID="eaf5c82575ca53cf64738eafa679d56a86938238183995384c4ed1f6782f3ea2" exitCode=255 Feb 23 13:16:52.679404 master-0 kubenswrapper[17411]: I0223 13:16:52.678807 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" event={"ID":"f88d6ed3-c0a6-4eef-b80c-417994cf69b0","Type":"ContainerDied","Data":"eaf5c82575ca53cf64738eafa679d56a86938238183995384c4ed1f6782f3ea2"} Feb 23 13:16:52.679404 master-0 kubenswrapper[17411]: I0223 13:16:52.678853 17411 scope.go:117] "RemoveContainer" containerID="2a82c81816ea58ba55512744c24143ddbc2f5aefd0d2aef524a9297835676cb3" Feb 23 13:16:52.680382 master-0 kubenswrapper[17411]: I0223 13:16:52.680326 17411 scope.go:117] "RemoveContainer" containerID="eaf5c82575ca53cf64738eafa679d56a86938238183995384c4ed1f6782f3ea2" Feb 23 13:16:52.680802 master-0 kubenswrapper[17411]: E0223 13:16:52.680734 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-storage-operator pod=cluster-storage-operator-f94476f49-ck859_openshift-cluster-storage-operator(f88d6ed3-c0a6-4eef-b80c-417994cf69b0)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" podUID="f88d6ed3-c0a6-4eef-b80c-417994cf69b0" Feb 23 13:16:52.707301 master-0 kubenswrapper[17411]: I0223 13:16:52.707164 17411 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 13:16:52.773402 master-0 kubenswrapper[17411]: I0223 13:16:52.773288 17411 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=45.773232975 podStartE2EDuration="45.773232975s" podCreationTimestamp="2026-02-23 13:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:16:40.885171993 +0000 UTC m=+594.312678600" watchObservedRunningTime="2026-02-23 13:16:52.773232975 +0000 UTC m=+606.200739592" Feb 23 13:16:52.775192 master-0 kubenswrapper[17411]: I0223 13:16:52.775150 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 23 13:16:52.775267 master-0 kubenswrapper[17411]: I0223 13:16:52.775213 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 23 13:16:52.851430 master-0 kubenswrapper[17411]: I0223 13:16:52.851369 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 13:16:52.871294 master-0 kubenswrapper[17411]: I0223 13:16:52.869643 17411 scope.go:117] "RemoveContainer" containerID="72600f7ac1b92f01197c56d298715777572c9e118234eed615d6c2923db72d7a" Feb 23 13:16:52.871514 master-0 kubenswrapper[17411]: E0223 13:16:52.871335 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-8mxs2_openshift-machine-api(16898873-740b-4b85-99cf-d25a28d4ab00)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" Feb 23 13:16:52.906050 master-0 kubenswrapper[17411]: I0223 13:16:52.905981 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 23 13:16:52.930638 master-0 kubenswrapper[17411]: I0223 13:16:52.930486 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:16:53.019478 master-0 kubenswrapper[17411]: I0223 13:16:52.983443 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 13:16:53.038441 master-0 kubenswrapper[17411]: I0223 13:16:53.036560 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=13.036532937 podStartE2EDuration="13.036532937s" podCreationTimestamp="2026-02-23 13:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:16:53.035593981 +0000 UTC m=+606.463100598" watchObservedRunningTime="2026-02-23 13:16:53.036532937 +0000 UTC m=+606.464039534" Feb 23 13:16:53.076158 master-0 kubenswrapper[17411]: I0223 13:16:53.076081 17411 patch_prober.go:28] interesting pod/route-controller-manager-78784b9d57-r4sf8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.89:8443/healthz\": read tcp 10.128.0.2:39220->10.128.0.89:8443: read: connection reset by peer" start-of-body= Feb 23 13:16:53.076399 master-0 kubenswrapper[17411]: I0223 13:16:53.076171 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.89:8443/healthz\": read tcp 10.128.0.2:39220->10.128.0.89:8443: read: connection reset by peer" Feb 23 13:16:53.077191 master-0 kubenswrapper[17411]: I0223 13:16:53.076551 17411 
patch_prober.go:28] interesting pod/route-controller-manager-78784b9d57-r4sf8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.89:8443/healthz\": read tcp 10.128.0.2:39232->10.128.0.89:8443: read: connection reset by peer" start-of-body= Feb 23 13:16:53.077191 master-0 kubenswrapper[17411]: I0223 13:16:53.076574 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.89:8443/healthz\": read tcp 10.128.0.2:39232->10.128.0.89:8443: read: connection reset by peer" Feb 23 13:16:53.203342 master-0 kubenswrapper[17411]: I0223 13:16:53.203218 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 13:16:53.362349 master-0 kubenswrapper[17411]: I0223 13:16:53.362229 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 23 13:16:53.432546 master-0 kubenswrapper[17411]: I0223 13:16:53.432466 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 13:16:53.520545 master-0 kubenswrapper[17411]: I0223 13:16:53.520457 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 13:16:53.542404 master-0 kubenswrapper[17411]: I0223 13:16:53.542316 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 13:16:53.558492 master-0 kubenswrapper[17411]: I0223 13:16:53.558444 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-hjsc8" Feb 23 
13:16:53.577773 master-0 kubenswrapper[17411]: I0223 13:16:53.577711 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 13:16:53.600269 master-0 kubenswrapper[17411]: I0223 13:16:53.600166 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 23 13:16:53.624713 master-0 kubenswrapper[17411]: I0223 13:16:53.624609 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 13:16:53.693743 master-0 kubenswrapper[17411]: I0223 13:16:53.693631 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-6wk86_ae1799b6-85b0-4aed-8835-35cb3d8d1109/openshift-apiserver-operator/2.log" Feb 23 13:16:53.694196 master-0 kubenswrapper[17411]: I0223 13:16:53.694143 17411 generic.go:334] "Generic (PLEG): container finished" podID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" containerID="fef4f8449d382c2b35398416206a546296a87b3c5b9bd1199e39bfceb5c14dae" exitCode=255 Feb 23 13:16:53.694320 master-0 kubenswrapper[17411]: I0223 13:16:53.694234 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" event={"ID":"ae1799b6-85b0-4aed-8835-35cb3d8d1109","Type":"ContainerDied","Data":"fef4f8449d382c2b35398416206a546296a87b3c5b9bd1199e39bfceb5c14dae"} Feb 23 13:16:53.694387 master-0 kubenswrapper[17411]: I0223 13:16:53.694346 17411 scope.go:117] "RemoveContainer" containerID="2232814e0e6f0bab57129339d23cb902f8963539e1dee1b616d27df4af9358d9" Feb 23 13:16:53.696729 master-0 kubenswrapper[17411]: I0223 13:16:53.696047 17411 scope.go:117] "RemoveContainer" containerID="fef4f8449d382c2b35398416206a546296a87b3c5b9bd1199e39bfceb5c14dae" Feb 23 13:16:53.696729 master-0 kubenswrapper[17411]: E0223 13:16:53.696491 17411 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-8586dccc9b-6wk86_openshift-apiserver-operator(ae1799b6-85b0-4aed-8835-35cb3d8d1109)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" podUID="ae1799b6-85b0-4aed-8835-35cb3d8d1109" Feb 23 13:16:53.698654 master-0 kubenswrapper[17411]: I0223 13:16:53.697828 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859_f88d6ed3-c0a6-4eef-b80c-417994cf69b0/cluster-storage-operator/1.log" Feb 23 13:16:53.701730 master-0 kubenswrapper[17411]: I0223 13:16:53.701669 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-78784b9d57-r4sf8_dc1620b0-3903-418b-9dd2-1f99bc5a0ae8/route-controller-manager/1.log" Feb 23 13:16:53.702550 master-0 kubenswrapper[17411]: I0223 13:16:53.702450 17411 generic.go:334] "Generic (PLEG): container finished" podID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" containerID="2c1de830984a0507238799826eac1f7e8b3e85789c4103320e7f2ff4a2d7b339" exitCode=255 Feb 23 13:16:53.702812 master-0 kubenswrapper[17411]: I0223 13:16:53.702561 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" event={"ID":"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8","Type":"ContainerDied","Data":"2c1de830984a0507238799826eac1f7e8b3e85789c4103320e7f2ff4a2d7b339"} Feb 23 13:16:53.703511 master-0 kubenswrapper[17411]: I0223 13:16:53.703477 17411 scope.go:117] "RemoveContainer" containerID="2c1de830984a0507238799826eac1f7e8b3e85789c4103320e7f2ff4a2d7b339" Feb 23 13:16:53.704130 master-0 kubenswrapper[17411]: E0223 13:16:53.703827 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-78784b9d57-r4sf8_openshift-route-controller-manager(dc1620b0-3903-418b-9dd2-1f99bc5a0ae8)\"" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" Feb 23 13:16:53.705526 master-0 kubenswrapper[17411]: I0223 13:16:53.705478 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-ccvpn_3ab71705-d574-4f95-b3fc-9f7cf5e8a557/kube-storage-version-migrator-operator/2.log" Feb 23 13:16:53.706283 master-0 kubenswrapper[17411]: I0223 13:16:53.706178 17411 generic.go:334] "Generic (PLEG): container finished" podID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" containerID="fec2b56ffa3c2fda91463659eb4be75b35169045cf2435badc161811557532bd" exitCode=255 Feb 23 13:16:53.706422 master-0 kubenswrapper[17411]: I0223 13:16:53.706305 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" event={"ID":"3ab71705-d574-4f95-b3fc-9f7cf5e8a557","Type":"ContainerDied","Data":"fec2b56ffa3c2fda91463659eb4be75b35169045cf2435badc161811557532bd"} Feb 23 13:16:53.706872 master-0 kubenswrapper[17411]: I0223 13:16:53.706825 17411 scope.go:117] "RemoveContainer" containerID="fec2b56ffa3c2fda91463659eb4be75b35169045cf2435badc161811557532bd" Feb 23 13:16:53.707407 master-0 kubenswrapper[17411]: E0223 13:16:53.707198 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-storage-version-migrator-operator 
pod=kube-storage-version-migrator-operator-fc889cfd5-ccvpn_openshift-kube-storage-version-migrator-operator(3ab71705-d574-4f95-b3fc-9f7cf5e8a557)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" podUID="3ab71705-d574-4f95-b3fc-9f7cf5e8a557" Feb 23 13:16:53.714403 master-0 kubenswrapper[17411]: I0223 13:16:53.713182 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-t9gx8_99399ebb-c95f-4663-b3b6-f5dfabf47fcf/openshift-controller-manager-operator/2.log" Feb 23 13:16:53.714951 master-0 kubenswrapper[17411]: I0223 13:16:53.714880 17411 generic.go:334] "Generic (PLEG): container finished" podID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" containerID="b51fc341743d0ee14779ec259987403cb18ccfb83872ba04b66accc494822766" exitCode=255 Feb 23 13:16:53.715115 master-0 kubenswrapper[17411]: I0223 13:16:53.715077 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" event={"ID":"99399ebb-c95f-4663-b3b6-f5dfabf47fcf","Type":"ContainerDied","Data":"b51fc341743d0ee14779ec259987403cb18ccfb83872ba04b66accc494822766"} Feb 23 13:16:53.716144 master-0 kubenswrapper[17411]: I0223 13:16:53.716089 17411 scope.go:117] "RemoveContainer" containerID="b51fc341743d0ee14779ec259987403cb18ccfb83872ba04b66accc494822766" Feb 23 13:16:53.716682 master-0 kubenswrapper[17411]: E0223 13:16:53.716611 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-584cc7bcb5-t9gx8_openshift-controller-manager-operator(99399ebb-c95f-4663-b3b6-f5dfabf47fcf)\"" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" podUID="99399ebb-c95f-4663-b3b6-f5dfabf47fcf" Feb 23 13:16:53.720661 master-0 kubenswrapper[17411]: I0223 13:16:53.720590 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-jpf5n_b1970ec8-620e-4529-bf3b-1cf9a52c27d3/kube-controller-manager-operator/2.log" Feb 23 13:16:53.721396 master-0 kubenswrapper[17411]: I0223 13:16:53.721351 17411 generic.go:334] "Generic (PLEG): container finished" podID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" containerID="1a0344d531e84ba87458cf9e245595bf26beb8556c42c2a98575065196b12964" exitCode=255 Feb 23 13:16:53.721505 master-0 kubenswrapper[17411]: I0223 13:16:53.721454 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" event={"ID":"b1970ec8-620e-4529-bf3b-1cf9a52c27d3","Type":"ContainerDied","Data":"1a0344d531e84ba87458cf9e245595bf26beb8556c42c2a98575065196b12964"} Feb 23 13:16:53.722034 master-0 kubenswrapper[17411]: I0223 13:16:53.721985 17411 scope.go:117] "RemoveContainer" containerID="1a0344d531e84ba87458cf9e245595bf26beb8556c42c2a98575065196b12964" Feb 23 13:16:53.722556 master-0 kubenswrapper[17411]: E0223 13:16:53.722224 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-7bcfbc574b-jpf5n_openshift-kube-controller-manager-operator(b1970ec8-620e-4529-bf3b-1cf9a52c27d3)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" podUID="b1970ec8-620e-4529-bf3b-1cf9a52c27d3" Feb 23 13:16:53.725143 master-0 kubenswrapper[17411]: I0223 13:16:53.725088 17411 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-rvccp_25b5540c-da7d-4b6f-a15f-394451f4674e/service-ca-operator/2.log" Feb 23 13:16:53.726600 master-0 kubenswrapper[17411]: I0223 13:16:53.726554 17411 generic.go:334] "Generic (PLEG): container finished" podID="25b5540c-da7d-4b6f-a15f-394451f4674e" containerID="b4325f84094f6a5f8ce69935fd5dcef125ec5b0e7208b70b7184af2ce6c4e6e7" exitCode=255 Feb 23 13:16:53.726710 master-0 kubenswrapper[17411]: I0223 13:16:53.726631 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" event={"ID":"25b5540c-da7d-4b6f-a15f-394451f4674e","Type":"ContainerDied","Data":"b4325f84094f6a5f8ce69935fd5dcef125ec5b0e7208b70b7184af2ce6c4e6e7"} Feb 23 13:16:53.727285 master-0 kubenswrapper[17411]: I0223 13:16:53.727216 17411 scope.go:117] "RemoveContainer" containerID="b4325f84094f6a5f8ce69935fd5dcef125ec5b0e7208b70b7184af2ce6c4e6e7" Feb 23 13:16:53.727631 master-0 kubenswrapper[17411]: E0223 13:16:53.727590 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-operator pod=service-ca-operator-c48c8bf7c-rvccp_openshift-service-ca-operator(25b5540c-da7d-4b6f-a15f-394451f4674e)\"" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" podUID="25b5540c-da7d-4b6f-a15f-394451f4674e" Feb 23 13:16:53.727749 master-0 kubenswrapper[17411]: I0223 13:16:53.727718 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 13:16:53.812868 master-0 kubenswrapper[17411]: I0223 13:16:53.812806 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 13:16:53.821040 master-0 kubenswrapper[17411]: I0223 13:16:53.820996 17411 scope.go:117] "RemoveContainer" 
containerID="2ce8dd30e28f7373e2d6bc5d3ffecbad9102db5068c6325288481dd16f27c6a9" Feb 23 13:16:53.855653 master-0 kubenswrapper[17411]: I0223 13:16:53.855347 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 13:16:53.855653 master-0 kubenswrapper[17411]: I0223 13:16:53.855528 17411 scope.go:117] "RemoveContainer" containerID="6eb708e99faa68cc0fb3a1744a6c33cf30aa202ca3b55e421e64cd3dbc5a07f1" Feb 23 13:16:53.892162 master-0 kubenswrapper[17411]: I0223 13:16:53.892098 17411 scope.go:117] "RemoveContainer" containerID="276f3b55300c4b42b7df0ff3b3561d901d7c658a4848ac016dd56a91f3b44118" Feb 23 13:16:53.926091 master-0 kubenswrapper[17411]: I0223 13:16:53.926044 17411 scope.go:117] "RemoveContainer" containerID="90c4d565bc8a9a3504b08ffb42ce37fbe9564d90f4149f9a2efe531a546f0e50" Feb 23 13:16:53.963958 master-0 kubenswrapper[17411]: I0223 13:16:53.963909 17411 scope.go:117] "RemoveContainer" containerID="93e9de56164a0387038f634504ac664a837d38dcf48d420691331e0584258696" Feb 23 13:16:54.256470 master-0 kubenswrapper[17411]: I0223 13:16:54.256307 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 13:16:54.498853 master-0 kubenswrapper[17411]: I0223 13:16:54.498767 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 13:16:54.742004 master-0 kubenswrapper[17411]: I0223 13:16:54.741867 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-576b4d78bd-nds57_71a07622-3038-4b8c-b6bb-5f28a4115012/service-ca-controller/1.log" Feb 23 13:16:54.742673 master-0 kubenswrapper[17411]: I0223 13:16:54.742594 17411 generic.go:334] "Generic (PLEG): container finished" podID="71a07622-3038-4b8c-b6bb-5f28a4115012" containerID="a46afb690c12f34d591fbefec336bbc94039270416c52a883ecc6b6372765700" 
exitCode=255 Feb 23 13:16:54.742673 master-0 kubenswrapper[17411]: I0223 13:16:54.742635 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" event={"ID":"71a07622-3038-4b8c-b6bb-5f28a4115012","Type":"ContainerDied","Data":"a46afb690c12f34d591fbefec336bbc94039270416c52a883ecc6b6372765700"} Feb 23 13:16:54.743087 master-0 kubenswrapper[17411]: I0223 13:16:54.742689 17411 scope.go:117] "RemoveContainer" containerID="049f73307f806904035423cc3efd5b594e3e2163521bdc03014ba97dd009ed14" Feb 23 13:16:54.743182 master-0 kubenswrapper[17411]: I0223 13:16:54.743104 17411 scope.go:117] "RemoveContainer" containerID="a46afb690c12f34d591fbefec336bbc94039270416c52a883ecc6b6372765700" Feb 23 13:16:54.743462 master-0 kubenswrapper[17411]: E0223 13:16:54.743407 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-controller pod=service-ca-576b4d78bd-nds57_openshift-service-ca(71a07622-3038-4b8c-b6bb-5f28a4115012)\"" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" podUID="71a07622-3038-4b8c-b6bb-5f28a4115012" Feb 23 13:16:54.746913 master-0 kubenswrapper[17411]: I0223 13:16:54.746864 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-dgldn_4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/kube-apiserver-operator/2.log" Feb 23 13:16:54.747814 master-0 kubenswrapper[17411]: I0223 13:16:54.747730 17411 generic.go:334] "Generic (PLEG): container finished" podID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" containerID="58697c87cd4c1a073964d8c5dbb45b8508190c35e0ffc3e1b2ec68e7b6317288" exitCode=255 Feb 23 13:16:54.747814 master-0 kubenswrapper[17411]: I0223 13:16:54.747789 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" 
event={"ID":"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30","Type":"ContainerDied","Data":"58697c87cd4c1a073964d8c5dbb45b8508190c35e0ffc3e1b2ec68e7b6317288"} Feb 23 13:16:54.748622 master-0 kubenswrapper[17411]: I0223 13:16:54.748573 17411 scope.go:117] "RemoveContainer" containerID="58697c87cd4c1a073964d8c5dbb45b8508190c35e0ffc3e1b2ec68e7b6317288" Feb 23 13:16:54.749039 master-0 kubenswrapper[17411]: E0223 13:16:54.748978 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-5d87bf58c-dgldn_openshift-kube-apiserver-operator(4a4b185e-17da-4711-a7b2-c2a9e1cd7b30)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" podUID="4a4b185e-17da-4711-a7b2-c2a9e1cd7b30" Feb 23 13:16:54.750541 master-0 kubenswrapper[17411]: I0223 13:16:54.750492 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/6.log" Feb 23 13:16:54.750975 master-0 kubenswrapper[17411]: I0223 13:16:54.750940 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/5.log" Feb 23 13:16:54.751090 master-0 kubenswrapper[17411]: I0223 13:16:54.750975 17411 generic.go:334] "Generic (PLEG): container finished" podID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8" exitCode=1 Feb 23 13:16:54.751090 master-0 kubenswrapper[17411]: I0223 13:16:54.751023 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" 
event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerDied","Data":"892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8"} Feb 23 13:16:54.751393 master-0 kubenswrapper[17411]: I0223 13:16:54.751364 17411 scope.go:117] "RemoveContainer" containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8" Feb 23 13:16:54.751653 master-0 kubenswrapper[17411]: E0223 13:16:54.751581 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:16:54.754013 master-0 kubenswrapper[17411]: I0223 13:16:54.753947 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-sx924_b7585f9f-12e5-451b-beeb-db43ae778f25/csi-snapshot-controller-operator/1.log" Feb 23 13:16:54.754529 master-0 kubenswrapper[17411]: I0223 13:16:54.754480 17411 generic.go:334] "Generic (PLEG): container finished" podID="b7585f9f-12e5-451b-beeb-db43ae778f25" containerID="9b83034b1e523498c93eb4e5fde2c67e0c10856a13b30b5b22d21e82983a70f1" exitCode=255 Feb 23 13:16:54.754649 master-0 kubenswrapper[17411]: I0223 13:16:54.754554 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" event={"ID":"b7585f9f-12e5-451b-beeb-db43ae778f25","Type":"ContainerDied","Data":"9b83034b1e523498c93eb4e5fde2c67e0c10856a13b30b5b22d21e82983a70f1"} Feb 23 13:16:54.754909 master-0 kubenswrapper[17411]: I0223 13:16:54.754865 17411 scope.go:117] "RemoveContainer" 
containerID="9b83034b1e523498c93eb4e5fde2c67e0c10856a13b30b5b22d21e82983a70f1" Feb 23 13:16:54.755089 master-0 kubenswrapper[17411]: E0223 13:16:54.755053 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=csi-snapshot-controller-operator pod=csi-snapshot-controller-operator-6fb4df594f-sx924_openshift-cluster-storage-operator(b7585f9f-12e5-451b-beeb-db43ae778f25)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" podUID="b7585f9f-12e5-451b-beeb-db43ae778f25" Feb 23 13:16:54.756997 master-0 kubenswrapper[17411]: I0223 13:16:54.756944 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-78784b9d57-r4sf8_dc1620b0-3903-418b-9dd2-1f99bc5a0ae8/route-controller-manager/1.log" Feb 23 13:16:54.765468 master-0 kubenswrapper[17411]: I0223 13:16:54.762699 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-jpf5n_b1970ec8-620e-4529-bf3b-1cf9a52c27d3/kube-controller-manager-operator/2.log" Feb 23 13:16:54.765468 master-0 kubenswrapper[17411]: I0223 13:16:54.764903 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-rvccp_25b5540c-da7d-4b6f-a15f-394451f4674e/service-ca-operator/2.log" Feb 23 13:16:54.767025 master-0 kubenswrapper[17411]: I0223 13:16:54.766990 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-6wk86_ae1799b6-85b0-4aed-8835-35cb3d8d1109/openshift-apiserver-operator/2.log" Feb 23 13:16:54.775269 master-0 kubenswrapper[17411]: I0223 13:16:54.775193 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-ccvpn_3ab71705-d574-4f95-b3fc-9f7cf5e8a557/kube-storage-version-migrator-operator/2.log" Feb 23 13:16:54.784295 master-0 kubenswrapper[17411]: I0223 13:16:54.779763 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 13:16:54.784295 master-0 kubenswrapper[17411]: I0223 13:16:54.780472 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-t9gx8_99399ebb-c95f-4663-b3b6-f5dfabf47fcf/openshift-controller-manager-operator/2.log" Feb 23 13:16:54.804821 master-0 kubenswrapper[17411]: I0223 13:16:54.804755 17411 scope.go:117] "RemoveContainer" containerID="5746b4ef817cfb0913d62f6abec0cfefcc90fea76e17ad5446db2699e58dc8b7" Feb 23 13:16:54.825422 master-0 kubenswrapper[17411]: I0223 13:16:54.825373 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:16:54.844036 master-0 kubenswrapper[17411]: I0223 13:16:54.843978 17411 scope.go:117] "RemoveContainer" containerID="7542932db8ce52dd0433bcdb6da61f01bd8b820ad9cbce4b661a7f58c10cfefe" Feb 23 13:16:54.866514 master-0 kubenswrapper[17411]: I0223 13:16:54.866460 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 23 13:16:54.888610 master-0 kubenswrapper[17411]: I0223 13:16:54.888565 17411 scope.go:117] "RemoveContainer" containerID="e56396e411b12f7186290221f3fddfff3f3b0e11c3f756be37a285081dee7384" Feb 23 13:16:55.030032 master-0 kubenswrapper[17411]: I0223 13:16:55.029849 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 23 13:16:55.075637 master-0 kubenswrapper[17411]: I0223 13:16:55.075575 17411 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 13:16:55.102752 master-0 kubenswrapper[17411]: I0223 13:16:55.102508 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 13:16:55.126593 master-0 kubenswrapper[17411]: I0223 13:16:55.126553 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 23 13:16:55.129532 master-0 kubenswrapper[17411]: I0223 13:16:55.129513 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 13:16:55.136284 master-0 kubenswrapper[17411]: I0223 13:16:55.136168 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-sxjzf" Feb 23 13:16:55.239202 master-0 kubenswrapper[17411]: I0223 13:16:55.239156 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 13:16:55.252070 master-0 kubenswrapper[17411]: I0223 13:16:55.252011 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 13:16:55.427923 master-0 kubenswrapper[17411]: I0223 13:16:55.427799 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 13:16:55.471657 master-0 kubenswrapper[17411]: I0223 13:16:55.471603 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:16:55.525064 master-0 kubenswrapper[17411]: I0223 13:16:55.525000 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 13:16:55.639501 master-0 
kubenswrapper[17411]: I0223 13:16:55.639423 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 13:16:55.712902 master-0 kubenswrapper[17411]: I0223 13:16:55.712829 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-8ns2k" Feb 23 13:16:55.737984 master-0 kubenswrapper[17411]: I0223 13:16:55.737911 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 13:16:55.743472 master-0 kubenswrapper[17411]: I0223 13:16:55.743406 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:16:55.780870 master-0 kubenswrapper[17411]: I0223 13:16:55.780682 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 13:16:55.797291 master-0 kubenswrapper[17411]: I0223 13:16:55.797202 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-dgldn_4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/kube-apiserver-operator/2.log" Feb 23 13:16:55.799364 master-0 kubenswrapper[17411]: I0223 13:16:55.799229 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/6.log" Feb 23 13:16:55.801635 master-0 kubenswrapper[17411]: I0223 13:16:55.801577 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-sx924_b7585f9f-12e5-451b-beeb-db43ae778f25/csi-snapshot-controller-operator/1.log" Feb 23 13:16:55.803740 master-0 kubenswrapper[17411]: I0223 13:16:55.803688 17411 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-service-ca_service-ca-576b4d78bd-nds57_71a07622-3038-4b8c-b6bb-5f28a4115012/service-ca-controller/1.log" Feb 23 13:16:55.839706 master-0 kubenswrapper[17411]: I0223 13:16:55.839626 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 13:16:55.869171 master-0 kubenswrapper[17411]: I0223 13:16:55.869091 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 23 13:16:55.931639 master-0 kubenswrapper[17411]: I0223 13:16:55.931565 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 13:16:55.947230 master-0 kubenswrapper[17411]: I0223 13:16:55.947146 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 23 13:16:56.023586 master-0 kubenswrapper[17411]: I0223 13:16:56.023404 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 23 13:16:56.137216 master-0 kubenswrapper[17411]: I0223 13:16:56.137136 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 13:16:56.156924 master-0 kubenswrapper[17411]: I0223 13:16:56.156839 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 13:16:56.257495 master-0 kubenswrapper[17411]: I0223 13:16:56.257406 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 23 13:16:56.331068 master-0 kubenswrapper[17411]: I0223 13:16:56.330928 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 23 13:16:56.379742 master-0 kubenswrapper[17411]: I0223 
13:16:56.379658 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:16:56.419093 master-0 kubenswrapper[17411]: I0223 13:16:56.419011 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 13:16:56.431385 master-0 kubenswrapper[17411]: I0223 13:16:56.431308 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 13:16:56.434887 master-0 kubenswrapper[17411]: I0223 13:16:56.434791 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 13:16:56.434887 master-0 kubenswrapper[17411]: I0223 13:16:56.434860 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-wbd45" Feb 23 13:16:56.448078 master-0 kubenswrapper[17411]: I0223 13:16:56.447994 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:16:56.519718 master-0 kubenswrapper[17411]: I0223 13:16:56.519611 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 13:16:56.549953 master-0 kubenswrapper[17411]: I0223 13:16:56.549863 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 13:16:56.551003 master-0 kubenswrapper[17411]: I0223 13:16:56.550077 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 23 13:16:56.557718 master-0 kubenswrapper[17411]: I0223 13:16:56.557625 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 23 13:16:56.562445 master-0 kubenswrapper[17411]: 
I0223 13:16:56.562376 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-5499c" Feb 23 13:16:56.613489 master-0 kubenswrapper[17411]: I0223 13:16:56.603590 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 13:16:56.613489 master-0 kubenswrapper[17411]: I0223 13:16:56.606652 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 13:16:56.623678 master-0 kubenswrapper[17411]: I0223 13:16:56.623576 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:16:56.626888 master-0 kubenswrapper[17411]: I0223 13:16:56.626290 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:16:56.658641 master-0 kubenswrapper[17411]: I0223 13:16:56.658592 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 13:16:56.699954 master-0 kubenswrapper[17411]: I0223 13:16:56.699886 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 13:16:56.742703 master-0 kubenswrapper[17411]: I0223 13:16:56.742631 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 13:16:56.833431 master-0 kubenswrapper[17411]: I0223 13:16:56.833347 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 13:16:56.854658 master-0 kubenswrapper[17411]: I0223 13:16:56.854593 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 13:16:56.864135 master-0 kubenswrapper[17411]: I0223 13:16:56.864045 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 13:16:56.882218 master-0 kubenswrapper[17411]: I0223 13:16:56.882125 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 23 13:16:56.897827 master-0 kubenswrapper[17411]: I0223 13:16:56.897762 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 13:16:56.960309 master-0 kubenswrapper[17411]: I0223 13:16:56.959347 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 13:16:56.974933 master-0 kubenswrapper[17411]: I0223 13:16:56.974863 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 23 13:16:57.013568 master-0 kubenswrapper[17411]: I0223 13:16:57.013463 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 13:16:57.054662 master-0 kubenswrapper[17411]: I0223 13:16:57.054559 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 23 13:16:57.093667 master-0 kubenswrapper[17411]: I0223 13:16:57.093555 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 13:16:57.111711 master-0 kubenswrapper[17411]: I0223 13:16:57.111595 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 23 13:16:57.140136 master-0 kubenswrapper[17411]: I0223 13:16:57.139958 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 13:16:57.141940 
master-0 kubenswrapper[17411]: I0223 13:16:57.141889 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 23 13:16:57.313855 master-0 kubenswrapper[17411]: I0223 13:16:57.313766 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8odpr3ab0635p" Feb 23 13:16:57.317540 master-0 kubenswrapper[17411]: I0223 13:16:57.317484 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 13:16:57.417555 master-0 kubenswrapper[17411]: I0223 13:16:57.417366 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 13:16:57.423690 master-0 kubenswrapper[17411]: I0223 13:16:57.423640 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-zmw9t" Feb 23 13:16:57.432224 master-0 kubenswrapper[17411]: I0223 13:16:57.432141 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 13:16:57.457498 master-0 kubenswrapper[17411]: I0223 13:16:57.457406 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 23 13:16:57.555929 master-0 kubenswrapper[17411]: I0223 13:16:57.555846 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:16:57.556870 master-0 kubenswrapper[17411]: I0223 13:16:57.555943 17411 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:57.556870 master-0 kubenswrapper[17411]: I0223 13:16:57.555959 17411 patch_prober.go:28] interesting pod/console-operator-5df5ffc47c-zwmzz container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:16:57.556870 master-0 kubenswrapper[17411]: I0223 13:16:57.556094 17411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" podUID="679fabb5-a261-402e-b5be-8fe7f0da0ec8" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:16:57.682720 master-0 kubenswrapper[17411]: I0223 13:16:57.682576 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 23 13:16:57.696292 master-0 kubenswrapper[17411]: I0223 13:16:57.696223 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 13:16:57.726616 master-0 kubenswrapper[17411]: I0223 13:16:57.726548 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 23 13:16:57.814264 master-0 kubenswrapper[17411]: I0223 13:16:57.814197 17411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 13:16:57.821282 master-0 kubenswrapper[17411]: I0223 13:16:57.821154 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/3.log" Feb 23 13:16:57.822219 master-0 kubenswrapper[17411]: I0223 13:16:57.822014 17411 generic.go:334] "Generic (PLEG): container finished" podID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" containerID="1152d28f4c1f4afcb3b6fce62c91926a60ad42ad6accdc15babf7a5ac6cf43c3" exitCode=255 Feb 23 13:16:57.822219 master-0 kubenswrapper[17411]: I0223 13:16:57.822065 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerDied","Data":"1152d28f4c1f4afcb3b6fce62c91926a60ad42ad6accdc15babf7a5ac6cf43c3"} Feb 23 13:16:57.822219 master-0 kubenswrapper[17411]: I0223 13:16:57.822110 17411 scope.go:117] "RemoveContainer" containerID="28759b105ef16fc9766c38f67df6c142da73e18661733246b760f77ad371c2c7" Feb 23 13:16:57.822830 master-0 kubenswrapper[17411]: I0223 13:16:57.822798 17411 scope.go:117] "RemoveContainer" containerID="1152d28f4c1f4afcb3b6fce62c91926a60ad42ad6accdc15babf7a5ac6cf43c3" Feb 23 13:16:57.823087 master-0 kubenswrapper[17411]: E0223 13:16:57.823049 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=authentication-operator pod=authentication-operator-5bd7c86784-ld4gj_openshift-authentication-operator(f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8)\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" Feb 23 13:16:57.835319 master-0 kubenswrapper[17411]: I0223 13:16:57.835126 17411 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:16:57.868599 master-0 kubenswrapper[17411]: I0223 13:16:57.868547 17411 scope.go:117] "RemoveContainer" containerID="67d44d75e83e1738383d940ce092f767380c2ef842af8140e42e9f6428546c93" Feb 23 13:16:57.868826 master-0 kubenswrapper[17411]: E0223 13:16:57.868795 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-p5488_openshift-config-operator(c2b80534-3c9d-4ddb-9215-d50d63294c7c)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" podUID="c2b80534-3c9d-4ddb-9215-d50d63294c7c" Feb 23 13:16:57.963564 master-0 kubenswrapper[17411]: I0223 13:16:57.963478 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 13:16:57.993909 master-0 kubenswrapper[17411]: I0223 13:16:57.993834 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 13:16:57.995508 master-0 kubenswrapper[17411]: I0223 13:16:57.995478 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:16:58.116673 master-0 kubenswrapper[17411]: I0223 13:16:58.116605 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9whd7" Feb 23 13:16:58.183891 master-0 kubenswrapper[17411]: I0223 13:16:58.183844 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 23 13:16:58.365349 master-0 kubenswrapper[17411]: I0223 13:16:58.364791 17411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-tls" Feb 23 13:16:58.365349 master-0 kubenswrapper[17411]: I0223 13:16:58.365134 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:16:58.388734 master-0 kubenswrapper[17411]: I0223 13:16:58.388666 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 23 13:16:58.423986 master-0 kubenswrapper[17411]: I0223 13:16:58.423931 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 13:16:58.426460 master-0 kubenswrapper[17411]: I0223 13:16:58.426397 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 13:16:58.449478 master-0 kubenswrapper[17411]: I0223 13:16:58.449419 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 13:16:58.473898 master-0 kubenswrapper[17411]: I0223 13:16:58.473825 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 13:16:58.532548 master-0 kubenswrapper[17411]: I0223 13:16:58.532443 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 13:16:58.599021 master-0 kubenswrapper[17411]: I0223 13:16:58.598933 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 13:16:58.619138 master-0 kubenswrapper[17411]: I0223 13:16:58.618988 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 13:16:58.620331 master-0 kubenswrapper[17411]: I0223 13:16:58.620237 17411 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 13:16:58.628295 master-0 kubenswrapper[17411]: I0223 13:16:58.628214 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 13:16:58.676284 master-0 kubenswrapper[17411]: I0223 13:16:58.676179 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 13:16:58.684821 master-0 kubenswrapper[17411]: I0223 13:16:58.684758 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 23 13:16:58.766443 master-0 kubenswrapper[17411]: I0223 13:16:58.765923 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 23 13:16:58.773753 master-0 kubenswrapper[17411]: I0223 13:16:58.773675 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 13:16:58.796262 master-0 kubenswrapper[17411]: I0223 13:16:58.796194 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 23 13:16:58.803588 master-0 kubenswrapper[17411]: I0223 13:16:58.803528 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:16:58.830360 master-0 kubenswrapper[17411]: I0223 13:16:58.830307 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/3.log" Feb 23 13:16:58.843217 master-0 kubenswrapper[17411]: I0223 13:16:58.843163 17411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 23 13:16:58.861310 master-0 kubenswrapper[17411]: I0223 13:16:58.861204 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:16:58.886890 master-0 kubenswrapper[17411]: I0223 13:16:58.886763 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-54m2k" Feb 23 13:16:58.997790 master-0 kubenswrapper[17411]: I0223 13:16:58.997732 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 23 13:16:59.023649 master-0 kubenswrapper[17411]: I0223 13:16:59.023590 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 13:16:59.093966 master-0 kubenswrapper[17411]: I0223 13:16:59.093914 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 13:16:59.252999 master-0 kubenswrapper[17411]: I0223 13:16:59.252937 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 13:16:59.278149 master-0 kubenswrapper[17411]: I0223 13:16:59.278091 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 13:16:59.330327 master-0 kubenswrapper[17411]: I0223 13:16:59.330288 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 23 13:16:59.357274 master-0 kubenswrapper[17411]: I0223 13:16:59.357224 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 13:16:59.388670 master-0 kubenswrapper[17411]: I0223 13:16:59.388620 17411 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq"] Feb 23 13:16:59.389049 master-0 kubenswrapper[17411]: I0223 13:16:59.389005 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 13:16:59.389454 master-0 kubenswrapper[17411]: E0223 13:16:59.389431 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" containerName="installer" Feb 23 13:16:59.389559 master-0 kubenswrapper[17411]: I0223 13:16:59.389545 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" containerName="installer" Feb 23 13:16:59.389859 master-0 kubenswrapper[17411]: I0223 13:16:59.389842 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f6e482-8da1-4df0-8de6-66a930e45a20" containerName="installer" Feb 23 13:16:59.390575 master-0 kubenswrapper[17411]: I0223 13:16:59.390552 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.392829 master-0 kubenswrapper[17411]: I0223 13:16:59.392785 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 13:16:59.404613 master-0 kubenswrapper[17411]: I0223 13:16:59.403805 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 13:16:59.405506 master-0 kubenswrapper[17411]: I0223 13:16:59.405449 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq"] Feb 23 13:16:59.405610 master-0 kubenswrapper[17411]: I0223 13:16:59.405554 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 13:16:59.424592 master-0 kubenswrapper[17411]: I0223 13:16:59.424534 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02201935-a02f-4045-9394-80b56aa38918-secret-volume\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.424592 master-0 kubenswrapper[17411]: I0223 13:16:59.424584 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02201935-a02f-4045-9394-80b56aa38918-config-volume\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.424838 master-0 kubenswrapper[17411]: I0223 13:16:59.424672 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-btvqg\" (UniqueName: \"kubernetes.io/projected/02201935-a02f-4045-9394-80b56aa38918-kube-api-access-btvqg\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.424838 master-0 kubenswrapper[17411]: I0223 13:16:59.424795 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-h78lq" Feb 23 13:16:59.524386 master-0 kubenswrapper[17411]: I0223 13:16:59.524238 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 13:16:59.525502 master-0 kubenswrapper[17411]: I0223 13:16:59.525461 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btvqg\" (UniqueName: \"kubernetes.io/projected/02201935-a02f-4045-9394-80b56aa38918-kube-api-access-btvqg\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.525674 master-0 kubenswrapper[17411]: I0223 13:16:59.525526 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02201935-a02f-4045-9394-80b56aa38918-secret-volume\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.525674 master-0 kubenswrapper[17411]: I0223 13:16:59.525547 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02201935-a02f-4045-9394-80b56aa38918-config-volume\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.526381 master-0 kubenswrapper[17411]: I0223 13:16:59.526344 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02201935-a02f-4045-9394-80b56aa38918-config-volume\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.531921 master-0 kubenswrapper[17411]: I0223 13:16:59.531882 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02201935-a02f-4045-9394-80b56aa38918-secret-volume\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.534196 master-0 kubenswrapper[17411]: I0223 13:16:59.534013 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 13:16:59.544691 master-0 kubenswrapper[17411]: I0223 13:16:59.544632 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btvqg\" (UniqueName: \"kubernetes.io/projected/02201935-a02f-4045-9394-80b56aa38918-kube-api-access-btvqg\") pod \"collect-profiles-29530875-tsdvq\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.570097 master-0 kubenswrapper[17411]: I0223 13:16:59.570032 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 13:16:59.716972 master-0 kubenswrapper[17411]: I0223 13:16:59.716330 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:16:59.727654 master-0 kubenswrapper[17411]: I0223 13:16:59.727579 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 13:16:59.736726 master-0 kubenswrapper[17411]: I0223 13:16:59.734743 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 23 13:16:59.939903 master-0 kubenswrapper[17411]: I0223 13:16:59.939847 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" Feb 23 13:16:59.940634 master-0 kubenswrapper[17411]: I0223 13:16:59.940611 17411 scope.go:117] "RemoveContainer" containerID="1152d28f4c1f4afcb3b6fce62c91926a60ad42ad6accdc15babf7a5ac6cf43c3" Feb 23 13:16:59.940909 master-0 kubenswrapper[17411]: E0223 13:16:59.940880 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=authentication-operator pod=authentication-operator-5bd7c86784-ld4gj_openshift-authentication-operator(f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8)\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" podUID="f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8" Feb 23 13:16:59.957350 master-0 kubenswrapper[17411]: I0223 13:16:59.955548 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 23 13:16:59.972665 master-0 kubenswrapper[17411]: I0223 13:16:59.972599 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 13:17:00.014013 master-0 kubenswrapper[17411]: I0223 13:17:00.013932 17411 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 13:17:00.046319 master-0 kubenswrapper[17411]: I0223 13:17:00.045148 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 13:17:00.074923 master-0 kubenswrapper[17411]: I0223 13:17:00.074423 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 13:17:00.097967 master-0 kubenswrapper[17411]: I0223 13:17:00.097915 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 13:17:00.148740 master-0 kubenswrapper[17411]: I0223 13:17:00.148673 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 23 13:17:00.216515 master-0 kubenswrapper[17411]: I0223 13:17:00.216466 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq"] Feb 23 13:17:00.226752 master-0 kubenswrapper[17411]: W0223 13:17:00.226682 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02201935_a02f_4045_9394_80b56aa38918.slice/crio-08a1f293e0f56a595972b67850fc70bb6e9ebd07e132006cca23ac7a912073f0 WatchSource:0}: Error finding container 08a1f293e0f56a595972b67850fc70bb6e9ebd07e132006cca23ac7a912073f0: Status 404 returned error can't find the container with id 08a1f293e0f56a595972b67850fc70bb6e9ebd07e132006cca23ac7a912073f0 Feb 23 13:17:00.323710 master-0 kubenswrapper[17411]: I0223 13:17:00.323632 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 23 13:17:00.329918 master-0 kubenswrapper[17411]: I0223 13:17:00.329808 17411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"openshift-service-ca.crt" Feb 23 13:17:00.337889 master-0 kubenswrapper[17411]: I0223 13:17:00.337646 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 13:17:00.353216 master-0 kubenswrapper[17411]: I0223 13:17:00.353140 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 13:17:00.359879 master-0 kubenswrapper[17411]: I0223 13:17:00.359715 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 13:17:00.371661 master-0 kubenswrapper[17411]: I0223 13:17:00.371640 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 23 13:17:00.384301 master-0 kubenswrapper[17411]: I0223 13:17:00.384218 17411 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 13:17:00.457907 master-0 kubenswrapper[17411]: I0223 13:17:00.457700 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 13:17:00.502314 master-0 kubenswrapper[17411]: I0223 13:17:00.502139 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 13:17:00.513091 master-0 kubenswrapper[17411]: I0223 13:17:00.513036 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-zmzm6" Feb 23 13:17:00.641026 master-0 kubenswrapper[17411]: I0223 13:17:00.640915 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 13:17:00.719340 master-0 kubenswrapper[17411]: I0223 13:17:00.719283 17411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-977zq" Feb 23 13:17:00.720214 master-0 kubenswrapper[17411]: I0223 13:17:00.719897 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 23 13:17:00.726714 master-0 kubenswrapper[17411]: I0223 13:17:00.726671 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 13:17:00.729868 master-0 kubenswrapper[17411]: I0223 13:17:00.729824 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 13:17:00.733103 master-0 kubenswrapper[17411]: I0223 13:17:00.733004 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 13:17:00.801153 master-0 kubenswrapper[17411]: I0223 13:17:00.801092 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 13:17:00.801955 master-0 kubenswrapper[17411]: I0223 13:17:00.801880 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 13:17:00.828624 master-0 kubenswrapper[17411]: I0223 13:17:00.828553 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-f5gf8" Feb 23 13:17:00.842596 master-0 kubenswrapper[17411]: I0223 13:17:00.842517 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-8ph7r" Feb 23 13:17:00.849226 master-0 kubenswrapper[17411]: I0223 13:17:00.849163 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29530875-tsdvq_02201935-a02f-4045-9394-80b56aa38918/collect-profiles/0.log" Feb 23 13:17:00.849375 master-0 
kubenswrapper[17411]: I0223 13:17:00.849289 17411 generic.go:334] "Generic (PLEG): container finished" podID="02201935-a02f-4045-9394-80b56aa38918" containerID="2423f09b67c796cce3058cc1398b1fb1acdbade08a83f5cff1810b400e00d64a" exitCode=1 Feb 23 13:17:00.849375 master-0 kubenswrapper[17411]: I0223 13:17:00.849349 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" event={"ID":"02201935-a02f-4045-9394-80b56aa38918","Type":"ContainerDied","Data":"2423f09b67c796cce3058cc1398b1fb1acdbade08a83f5cff1810b400e00d64a"} Feb 23 13:17:00.849531 master-0 kubenswrapper[17411]: I0223 13:17:00.849405 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" event={"ID":"02201935-a02f-4045-9394-80b56aa38918","Type":"ContainerStarted","Data":"08a1f293e0f56a595972b67850fc70bb6e9ebd07e132006cca23ac7a912073f0"} Feb 23 13:17:00.894306 master-0 kubenswrapper[17411]: I0223 13:17:00.894098 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 13:17:00.958109 master-0 kubenswrapper[17411]: I0223 13:17:00.958002 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 13:17:00.979278 master-0 kubenswrapper[17411]: I0223 13:17:00.979187 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 13:17:00.997723 master-0 kubenswrapper[17411]: I0223 13:17:00.997640 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 13:17:01.017862 master-0 kubenswrapper[17411]: I0223 13:17:01.017772 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-lp4jk" Feb 23 13:17:01.025849 master-0 kubenswrapper[17411]: 
I0223 13:17:01.025768 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 13:17:01.033188 master-0 kubenswrapper[17411]: I0223 13:17:01.033130 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2628k" Feb 23 13:17:01.046386 master-0 kubenswrapper[17411]: I0223 13:17:01.046311 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-582hf" Feb 23 13:17:01.057412 master-0 kubenswrapper[17411]: I0223 13:17:01.057204 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 23 13:17:01.070160 master-0 kubenswrapper[17411]: I0223 13:17:01.070085 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 13:17:01.125529 master-0 kubenswrapper[17411]: I0223 13:17:01.125419 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 13:17:01.178354 master-0 kubenswrapper[17411]: I0223 13:17:01.178125 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-qdkmb" Feb 23 13:17:01.260333 master-0 kubenswrapper[17411]: I0223 13:17:01.256968 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 13:17:01.260333 master-0 kubenswrapper[17411]: I0223 13:17:01.258270 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 23 13:17:01.260333 master-0 kubenswrapper[17411]: I0223 13:17:01.259902 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7q6an9sqsfn51" Feb 23 13:17:01.349503 master-0 kubenswrapper[17411]: 
I0223 13:17:01.349436 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 13:17:01.359558 master-0 kubenswrapper[17411]: I0223 13:17:01.359477 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 23 13:17:01.505926 master-0 kubenswrapper[17411]: I0223 13:17:01.505842 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 13:17:01.528476 master-0 kubenswrapper[17411]: I0223 13:17:01.528415 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 23 13:17:01.529520 master-0 kubenswrapper[17411]: I0223 13:17:01.529487 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:17:01.560236 master-0 kubenswrapper[17411]: I0223 13:17:01.560124 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 23 13:17:01.569155 master-0 kubenswrapper[17411]: I0223 13:17:01.569100 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4dmq5" Feb 23 13:17:01.681359 master-0 kubenswrapper[17411]: I0223 13:17:01.681311 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:17:01.730389 master-0 kubenswrapper[17411]: I0223 13:17:01.730330 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-9jkd0a8djrqaf" Feb 23 13:17:01.748943 master-0 kubenswrapper[17411]: I0223 13:17:01.748833 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 23 
13:17:01.792654 master-0 kubenswrapper[17411]: I0223 13:17:01.792529 17411 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 13:17:01.797180 master-0 kubenswrapper[17411]: I0223 13:17:01.797112 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 23 13:17:01.806444 master-0 kubenswrapper[17411]: I0223 13:17:01.806378 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 13:17:01.834375 master-0 kubenswrapper[17411]: I0223 13:17:01.834307 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 23 13:17:01.874051 master-0 kubenswrapper[17411]: I0223 13:17:01.873986 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 13:17:01.916605 master-0 kubenswrapper[17411]: I0223 13:17:01.915887 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 13:17:01.927272 master-0 kubenswrapper[17411]: I0223 13:17:01.927173 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 13:17:02.011767 master-0 kubenswrapper[17411]: I0223 13:17:02.011703 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 13:17:02.069307 master-0 kubenswrapper[17411]: I0223 13:17:02.069117 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 23 13:17:02.125610 master-0 kubenswrapper[17411]: I0223 13:17:02.125550 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 13:17:02.184284 master-0 kubenswrapper[17411]: I0223 
13:17:02.184161 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 13:17:02.188431 master-0 kubenswrapper[17411]: I0223 13:17:02.188372 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:17:02.191088 master-0 kubenswrapper[17411]: I0223 13:17:02.191056 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29530875-tsdvq_02201935-a02f-4045-9394-80b56aa38918/collect-profiles/0.log" Feb 23 13:17:02.191339 master-0 kubenswrapper[17411]: I0223 13:17:02.191159 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:17:02.312703 master-0 kubenswrapper[17411]: I0223 13:17:02.312617 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 13:17:02.355766 master-0 kubenswrapper[17411]: I0223 13:17:02.354843 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 23 13:17:02.374447 master-0 kubenswrapper[17411]: I0223 13:17:02.373569 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02201935-a02f-4045-9394-80b56aa38918-secret-volume\") pod \"02201935-a02f-4045-9394-80b56aa38918\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " Feb 23 13:17:02.374447 master-0 kubenswrapper[17411]: I0223 13:17:02.373650 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02201935-a02f-4045-9394-80b56aa38918-config-volume\") pod \"02201935-a02f-4045-9394-80b56aa38918\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " Feb 23 13:17:02.374447 master-0 
kubenswrapper[17411]: I0223 13:17:02.373828 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btvqg\" (UniqueName: \"kubernetes.io/projected/02201935-a02f-4045-9394-80b56aa38918-kube-api-access-btvqg\") pod \"02201935-a02f-4045-9394-80b56aa38918\" (UID: \"02201935-a02f-4045-9394-80b56aa38918\") " Feb 23 13:17:02.374774 master-0 kubenswrapper[17411]: I0223 13:17:02.374595 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02201935-a02f-4045-9394-80b56aa38918-config-volume" (OuterVolumeSpecName: "config-volume") pod "02201935-a02f-4045-9394-80b56aa38918" (UID: "02201935-a02f-4045-9394-80b56aa38918"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:02.377352 master-0 kubenswrapper[17411]: I0223 13:17:02.377275 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02201935-a02f-4045-9394-80b56aa38918-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "02201935-a02f-4045-9394-80b56aa38918" (UID: "02201935-a02f-4045-9394-80b56aa38918"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:17:02.379022 master-0 kubenswrapper[17411]: I0223 13:17:02.378979 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02201935-a02f-4045-9394-80b56aa38918-kube-api-access-btvqg" (OuterVolumeSpecName: "kube-api-access-btvqg") pod "02201935-a02f-4045-9394-80b56aa38918" (UID: "02201935-a02f-4045-9394-80b56aa38918"). InnerVolumeSpecName "kube-api-access-btvqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:17:02.381715 master-0 kubenswrapper[17411]: I0223 13:17:02.381663 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 13:17:02.391917 master-0 kubenswrapper[17411]: I0223 13:17:02.391865 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 23 13:17:02.396759 master-0 kubenswrapper[17411]: I0223 13:17:02.396720 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 23 13:17:02.414446 master-0 kubenswrapper[17411]: I0223 13:17:02.414401 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 13:17:02.476361 master-0 kubenswrapper[17411]: I0223 13:17:02.475617 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btvqg\" (UniqueName: \"kubernetes.io/projected/02201935-a02f-4045-9394-80b56aa38918-kube-api-access-btvqg\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:02.476361 master-0 kubenswrapper[17411]: I0223 13:17:02.475672 17411 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02201935-a02f-4045-9394-80b56aa38918-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:02.476361 master-0 kubenswrapper[17411]: I0223 13:17:02.475685 17411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02201935-a02f-4045-9394-80b56aa38918-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:02.515453 master-0 kubenswrapper[17411]: I0223 13:17:02.515378 17411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:17:02.516096 master-0 
kubenswrapper[17411]: I0223 13:17:02.516066 17411 scope.go:117] "RemoveContainer" containerID="2c1de830984a0507238799826eac1f7e8b3e85789c4103320e7f2ff4a2d7b339" Feb 23 13:17:02.516373 master-0 kubenswrapper[17411]: E0223 13:17:02.516345 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-78784b9d57-r4sf8_openshift-route-controller-manager(dc1620b0-3903-418b-9dd2-1f99bc5a0ae8)\"" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" podUID="dc1620b0-3903-418b-9dd2-1f99bc5a0ae8" Feb 23 13:17:02.572764 master-0 kubenswrapper[17411]: I0223 13:17:02.572695 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 13:17:02.608095 master-0 kubenswrapper[17411]: I0223 13:17:02.607889 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 13:17:02.668314 master-0 kubenswrapper[17411]: I0223 13:17:02.668222 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-vhrrg" Feb 23 13:17:02.674620 master-0 kubenswrapper[17411]: I0223 13:17:02.674599 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 13:17:02.695023 master-0 kubenswrapper[17411]: I0223 13:17:02.694938 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 23 13:17:02.733685 master-0 kubenswrapper[17411]: I0223 13:17:02.733629 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 23 13:17:02.744095 master-0 kubenswrapper[17411]: I0223 13:17:02.744004 17411 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:17:02.762038 master-0 kubenswrapper[17411]: I0223 13:17:02.761783 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 13:17:02.805518 master-0 kubenswrapper[17411]: I0223 13:17:02.804992 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:17:02.817885 master-0 kubenswrapper[17411]: I0223 13:17:02.817825 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 13:17:02.843466 master-0 kubenswrapper[17411]: I0223 13:17:02.843374 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 13:17:02.870438 master-0 kubenswrapper[17411]: I0223 13:17:02.870319 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29530875-tsdvq_02201935-a02f-4045-9394-80b56aa38918/collect-profiles/0.log" Feb 23 13:17:02.870438 master-0 kubenswrapper[17411]: I0223 13:17:02.870416 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" Feb 23 13:17:02.885981 master-0 kubenswrapper[17411]: I0223 13:17:02.885917 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-tsdvq" event={"ID":"02201935-a02f-4045-9394-80b56aa38918","Type":"ContainerDied","Data":"08a1f293e0f56a595972b67850fc70bb6e9ebd07e132006cca23ac7a912073f0"} Feb 23 13:17:02.885981 master-0 kubenswrapper[17411]: I0223 13:17:02.885967 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08a1f293e0f56a595972b67850fc70bb6e9ebd07e132006cca23ac7a912073f0" Feb 23 13:17:02.901609 master-0 kubenswrapper[17411]: I0223 13:17:02.898428 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 23 13:17:02.986177 master-0 kubenswrapper[17411]: I0223 13:17:02.986087 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-955b69498-pdh8w"] Feb 23 13:17:02.986571 master-0 kubenswrapper[17411]: E0223 13:17:02.986528 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02201935-a02f-4045-9394-80b56aa38918" containerName="collect-profiles" Feb 23 13:17:02.986571 master-0 kubenswrapper[17411]: I0223 13:17:02.986560 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="02201935-a02f-4045-9394-80b56aa38918" containerName="collect-profiles" Feb 23 13:17:02.986880 master-0 kubenswrapper[17411]: I0223 13:17:02.986843 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="02201935-a02f-4045-9394-80b56aa38918" containerName="collect-profiles" Feb 23 13:17:02.991823 master-0 kubenswrapper[17411]: I0223 13:17:02.990200 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-955b69498-pdh8w" Feb 23 13:17:02.994763 master-0 kubenswrapper[17411]: I0223 13:17:02.994697 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 13:17:02.994979 master-0 kubenswrapper[17411]: I0223 13:17:02.994920 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 23 13:17:02.995050 master-0 kubenswrapper[17411]: I0223 13:17:02.994978 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-r5fj5" Feb 23 13:17:02.996356 master-0 kubenswrapper[17411]: I0223 13:17:02.996318 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 13:17:03.024058 master-0 kubenswrapper[17411]: I0223 13:17:03.023997 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-pdh8w"] Feb 23 13:17:03.085000 master-0 kubenswrapper[17411]: I0223 13:17:03.084909 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknrk\" (UniqueName: \"kubernetes.io/projected/736dee32-e1e3-4ba4-b0c5-cf54b2af94b1-kube-api-access-hknrk\") pod \"downloads-955b69498-pdh8w\" (UID: \"736dee32-e1e3-4ba4-b0c5-cf54b2af94b1\") " pod="openshift-console/downloads-955b69498-pdh8w" Feb 23 13:17:03.126809 master-0 kubenswrapper[17411]: I0223 13:17:03.126669 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 23 13:17:03.179612 master-0 kubenswrapper[17411]: I0223 13:17:03.178588 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:17:03.179892 master-0 kubenswrapper[17411]: I0223 13:17:03.179562 17411 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="afeec80f2ec1ff5cb32c2367912befef" containerName="startup-monitor" containerID="cri-o://0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb" gracePeriod=5 Feb 23 13:17:03.187057 master-0 kubenswrapper[17411]: I0223 13:17:03.186314 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hknrk\" (UniqueName: \"kubernetes.io/projected/736dee32-e1e3-4ba4-b0c5-cf54b2af94b1-kube-api-access-hknrk\") pod \"downloads-955b69498-pdh8w\" (UID: \"736dee32-e1e3-4ba4-b0c5-cf54b2af94b1\") " pod="openshift-console/downloads-955b69498-pdh8w" Feb 23 13:17:03.211995 master-0 kubenswrapper[17411]: I0223 13:17:03.211910 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 13:17:03.227988 master-0 kubenswrapper[17411]: I0223 13:17:03.227893 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 23 13:17:03.228271 master-0 kubenswrapper[17411]: I0223 13:17:03.228197 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 13:17:03.293775 master-0 kubenswrapper[17411]: I0223 13:17:03.293700 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:17:03.296218 master-0 kubenswrapper[17411]: I0223 13:17:03.296146 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-pzqs9" Feb 23 13:17:03.310788 master-0 kubenswrapper[17411]: I0223 13:17:03.310714 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 13:17:03.313320 master-0 kubenswrapper[17411]: I0223 13:17:03.313265 17411 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 13:17:03.365682 master-0 kubenswrapper[17411]: I0223 13:17:03.365565 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 13:17:03.365999 master-0 kubenswrapper[17411]: I0223 13:17:03.365785 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-wt8dr" Feb 23 13:17:03.379122 master-0 kubenswrapper[17411]: I0223 13:17:03.378998 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 13:17:03.408419 master-0 kubenswrapper[17411]: I0223 13:17:03.408348 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 13:17:03.410021 master-0 kubenswrapper[17411]: I0223 13:17:03.409981 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hknrk\" (UniqueName: \"kubernetes.io/projected/736dee32-e1e3-4ba4-b0c5-cf54b2af94b1-kube-api-access-hknrk\") pod \"downloads-955b69498-pdh8w\" (UID: \"736dee32-e1e3-4ba4-b0c5-cf54b2af94b1\") " pod="openshift-console/downloads-955b69498-pdh8w" Feb 23 13:17:03.524085 master-0 kubenswrapper[17411]: I0223 13:17:03.524031 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 23 13:17:03.533709 master-0 kubenswrapper[17411]: I0223 13:17:03.533655 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 13:17:03.575736 master-0 kubenswrapper[17411]: I0223 13:17:03.575693 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 13:17:03.602217 master-0 
kubenswrapper[17411]: I0223 13:17:03.602158 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 13:17:03.667594 master-0 kubenswrapper[17411]: I0223 13:17:03.667490 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 23 13:17:03.668269 master-0 kubenswrapper[17411]: I0223 13:17:03.668236 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 13:17:03.674137 master-0 kubenswrapper[17411]: I0223 13:17:03.674113 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-pdh8w" Feb 23 13:17:03.731095 master-0 kubenswrapper[17411]: I0223 13:17:03.731051 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 23 13:17:03.750531 master-0 kubenswrapper[17411]: I0223 13:17:03.750496 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 13:17:03.788675 master-0 kubenswrapper[17411]: I0223 13:17:03.786771 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 23 13:17:03.807113 master-0 kubenswrapper[17411]: I0223 13:17:03.806019 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-kcb76" Feb 23 13:17:03.828085 master-0 kubenswrapper[17411]: I0223 13:17:03.828055 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-n9dxs" Feb 23 13:17:03.869348 master-0 kubenswrapper[17411]: I0223 13:17:03.868535 17411 scope.go:117] "RemoveContainer" containerID="72600f7ac1b92f01197c56d298715777572c9e118234eed615d6c2923db72d7a" Feb 23 13:17:03.869348 
master-0 kubenswrapper[17411]: E0223 13:17:03.868776 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-8mxs2_openshift-machine-api(16898873-740b-4b85-99cf-d25a28d4ab00)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" Feb 23 13:17:03.892587 master-0 kubenswrapper[17411]: I0223 13:17:03.869368 17411 scope.go:117] "RemoveContainer" containerID="fec2b56ffa3c2fda91463659eb4be75b35169045cf2435badc161811557532bd" Feb 23 13:17:03.911559 master-0 kubenswrapper[17411]: I0223 13:17:03.911505 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 23 13:17:03.915526 master-0 kubenswrapper[17411]: I0223 13:17:03.915469 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 13:17:03.945895 master-0 kubenswrapper[17411]: I0223 13:17:03.945838 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 13:17:04.344643 master-0 kubenswrapper[17411]: I0223 13:17:04.344594 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-pdh8w"] Feb 23 13:17:04.346763 master-0 kubenswrapper[17411]: W0223 13:17:04.346709 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736dee32_e1e3_4ba4_b0c5_cf54b2af94b1.slice/crio-590cf04a6d86a8fb3bd9ca7adfb8ebb5aed62917ff0595831b6503834cd98131 WatchSource:0}: Error finding container 590cf04a6d86a8fb3bd9ca7adfb8ebb5aed62917ff0595831b6503834cd98131: Status 404 returned error can't find the container with id 
590cf04a6d86a8fb3bd9ca7adfb8ebb5aed62917ff0595831b6503834cd98131 Feb 23 13:17:04.367874 master-0 kubenswrapper[17411]: I0223 13:17:04.364788 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-4q8qn" Feb 23 13:17:04.566065 master-0 kubenswrapper[17411]: I0223 13:17:04.565999 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 13:17:04.587851 master-0 kubenswrapper[17411]: I0223 13:17:04.587808 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-twm6g" Feb 23 13:17:04.696260 master-0 kubenswrapper[17411]: I0223 13:17:04.696184 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 13:17:04.771473 master-0 kubenswrapper[17411]: I0223 13:17:04.771412 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 13:17:04.827320 master-0 kubenswrapper[17411]: I0223 13:17:04.826798 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 13:17:04.839913 master-0 kubenswrapper[17411]: I0223 13:17:04.839876 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 23 13:17:04.874885 master-0 kubenswrapper[17411]: I0223 13:17:04.874779 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:17:04.903208 master-0 kubenswrapper[17411]: I0223 13:17:04.903147 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-ccvpn_3ab71705-d574-4f95-b3fc-9f7cf5e8a557/kube-storage-version-migrator-operator/2.log" Feb 23 13:17:04.903424 master-0 kubenswrapper[17411]: I0223 13:17:04.903264 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-ccvpn" event={"ID":"3ab71705-d574-4f95-b3fc-9f7cf5e8a557","Type":"ContainerStarted","Data":"99ca311859c66c28c1ab1e76462091b83bcf867393f945c7158b4cba06793338"} Feb 23 13:17:04.905313 master-0 kubenswrapper[17411]: I0223 13:17:04.904338 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-pdh8w" event={"ID":"736dee32-e1e3-4ba4-b0c5-cf54b2af94b1","Type":"ContainerStarted","Data":"590cf04a6d86a8fb3bd9ca7adfb8ebb5aed62917ff0595831b6503834cd98131"} Feb 23 13:17:05.099855 master-0 kubenswrapper[17411]: I0223 13:17:05.099724 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 13:17:05.301647 master-0 kubenswrapper[17411]: I0223 13:17:05.301591 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 13:17:05.314595 master-0 kubenswrapper[17411]: I0223 13:17:05.314546 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 23 13:17:05.356532 master-0 kubenswrapper[17411]: I0223 13:17:05.356404 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 13:17:05.582064 master-0 kubenswrapper[17411]: I0223 13:17:05.581058 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 
13:17:05.640653 master-0 kubenswrapper[17411]: I0223 13:17:05.640442 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 23 13:17:05.717918 master-0 kubenswrapper[17411]: I0223 13:17:05.717844 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 23 13:17:05.738359 master-0 kubenswrapper[17411]: I0223 13:17:05.738307 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 13:17:05.763800 master-0 kubenswrapper[17411]: I0223 13:17:05.763739 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:17:05.828267 master-0 kubenswrapper[17411]: I0223 13:17:05.821432 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 13:17:05.853270 master-0 kubenswrapper[17411]: I0223 13:17:05.852971 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 13:17:05.869496 master-0 kubenswrapper[17411]: I0223 13:17:05.869457 17411 scope.go:117] "RemoveContainer" containerID="eaf5c82575ca53cf64738eafa679d56a86938238183995384c4ed1f6782f3ea2" Feb 23 13:17:05.869793 master-0 kubenswrapper[17411]: I0223 13:17:05.869764 17411 scope.go:117] "RemoveContainer" containerID="58697c87cd4c1a073964d8c5dbb45b8508190c35e0ffc3e1b2ec68e7b6317288" Feb 23 13:17:05.869940 master-0 kubenswrapper[17411]: I0223 13:17:05.869900 17411 scope.go:117] "RemoveContainer" containerID="b51fc341743d0ee14779ec259987403cb18ccfb83872ba04b66accc494822766" Feb 23 13:17:05.873477 master-0 kubenswrapper[17411]: I0223 13:17:05.873455 17411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 13:17:05.880051 master-0 kubenswrapper[17411]: I0223 13:17:05.878663 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 23 13:17:06.147098 master-0 kubenswrapper[17411]: I0223 13:17:06.147030 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wnzv6" Feb 23 13:17:06.431850 master-0 kubenswrapper[17411]: I0223 13:17:06.431719 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 13:17:06.471534 master-0 kubenswrapper[17411]: I0223 13:17:06.471269 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-8bvc9" Feb 23 13:17:06.561338 master-0 kubenswrapper[17411]: I0223 13:17:06.561205 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5df5ffc47c-zwmzz" Feb 23 13:17:06.676230 master-0 kubenswrapper[17411]: I0223 13:17:06.676173 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-dldvx" Feb 23 13:17:06.767829 master-0 kubenswrapper[17411]: I0223 13:17:06.767779 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 23 13:17:06.847680 master-0 kubenswrapper[17411]: I0223 13:17:06.847592 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-zj94f" Feb 23 13:17:06.875030 master-0 kubenswrapper[17411]: I0223 13:17:06.874967 17411 scope.go:117] "RemoveContainer" containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8" Feb 23 13:17:06.875030 master-0 kubenswrapper[17411]: I0223 13:17:06.875034 17411 scope.go:117] 
"RemoveContainer" containerID="9b83034b1e523498c93eb4e5fde2c67e0c10856a13b30b5b22d21e82983a70f1" Feb 23 13:17:06.875322 master-0 kubenswrapper[17411]: E0223 13:17:06.875228 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:17:06.940517 master-0 kubenswrapper[17411]: I0223 13:17:06.940478 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-ck859_f88d6ed3-c0a6-4eef-b80c-417994cf69b0/cluster-storage-operator/1.log" Feb 23 13:17:06.940747 master-0 kubenswrapper[17411]: I0223 13:17:06.940582 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-ck859" event={"ID":"f88d6ed3-c0a6-4eef-b80c-417994cf69b0","Type":"ContainerStarted","Data":"7b9ac792ac8b4b2e20886064e88c7c8c8d9b3230ab4f38cf3caf77f951eacf77"} Feb 23 13:17:06.943396 master-0 kubenswrapper[17411]: I0223 13:17:06.943354 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-t9gx8_99399ebb-c95f-4663-b3b6-f5dfabf47fcf/openshift-controller-manager-operator/2.log" Feb 23 13:17:06.943496 master-0 kubenswrapper[17411]: I0223 13:17:06.943456 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-t9gx8" event={"ID":"99399ebb-c95f-4663-b3b6-f5dfabf47fcf","Type":"ContainerStarted","Data":"a3896ebf3ae8996c256a4ce5e6469b2e95934143f84004dfddbcc5bb4066eb3d"} Feb 23 
13:17:06.950637 master-0 kubenswrapper[17411]: I0223 13:17:06.949969 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-dgldn_4a4b185e-17da-4711-a7b2-c2a9e1cd7b30/kube-apiserver-operator/2.log" Feb 23 13:17:06.950637 master-0 kubenswrapper[17411]: I0223 13:17:06.950043 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-dgldn" event={"ID":"4a4b185e-17da-4711-a7b2-c2a9e1cd7b30","Type":"ContainerStarted","Data":"efcd35078f287bbf19c5bf3b11460eea2f71be202cf38643a1dba85c18190365"} Feb 23 13:17:06.983198 master-0 kubenswrapper[17411]: I0223 13:17:06.983166 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 23 13:17:07.030504 master-0 kubenswrapper[17411]: I0223 13:17:07.030400 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-z5ckf" Feb 23 13:17:07.257328 master-0 kubenswrapper[17411]: I0223 13:17:07.255604 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 23 13:17:07.609263 master-0 kubenswrapper[17411]: I0223 13:17:07.607952 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 23 13:17:07.741451 master-0 kubenswrapper[17411]: I0223 13:17:07.740315 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 23 13:17:07.868490 master-0 kubenswrapper[17411]: I0223 13:17:07.868352 17411 scope.go:117] "RemoveContainer" containerID="fef4f8449d382c2b35398416206a546296a87b3c5b9bd1199e39bfceb5c14dae" Feb 23 13:17:07.957801 master-0 kubenswrapper[17411]: I0223 13:17:07.957751 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-sx924_b7585f9f-12e5-451b-beeb-db43ae778f25/csi-snapshot-controller-operator/1.log" Feb 23 13:17:07.957906 master-0 kubenswrapper[17411]: I0223 13:17:07.957826 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-sx924" event={"ID":"b7585f9f-12e5-451b-beeb-db43ae778f25","Type":"ContainerStarted","Data":"5636b49d946524e4a22c125e63f292b34c4c405491eeb7a74e70d4244d7fc71f"} Feb 23 13:17:08.316607 master-0 kubenswrapper[17411]: I0223 13:17:08.316545 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 13:17:08.374930 master-0 kubenswrapper[17411]: I0223 13:17:08.374860 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 13:17:08.479760 master-0 kubenswrapper[17411]: I0223 13:17:08.479713 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 23 13:17:08.491957 master-0 kubenswrapper[17411]: I0223 13:17:08.491908 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 23 13:17:08.744787 master-0 kubenswrapper[17411]: I0223 13:17:08.744659 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-786c4f4c85-kvlm6"] Feb 23 13:17:08.745026 master-0 kubenswrapper[17411]: E0223 13:17:08.745007 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeec80f2ec1ff5cb32c2367912befef" containerName="startup-monitor" Feb 23 13:17:08.745026 master-0 kubenswrapper[17411]: I0223 13:17:08.745026 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeec80f2ec1ff5cb32c2367912befef" containerName="startup-monitor" Feb 23 13:17:08.745236 master-0 
kubenswrapper[17411]: I0223 13:17:08.745211 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeec80f2ec1ff5cb32c2367912befef" containerName="startup-monitor" Feb 23 13:17:08.745745 master-0 kubenswrapper[17411]: I0223 13:17:08.745725 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:08.747820 master-0 kubenswrapper[17411]: I0223 13:17:08.747745 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 23 13:17:08.747998 master-0 kubenswrapper[17411]: I0223 13:17:08.747969 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 13:17:08.748176 master-0 kubenswrapper[17411]: I0223 13:17:08.748122 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 13:17:08.748236 master-0 kubenswrapper[17411]: I0223 13:17:08.748137 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 13:17:08.749407 master-0 kubenswrapper[17411]: I0223 13:17:08.749199 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 23 13:17:08.764317 master-0 kubenswrapper[17411]: I0223 13:17:08.764240 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-786c4f4c85-kvlm6"] Feb 23 13:17:08.796130 master-0 kubenswrapper[17411]: I0223 13:17:08.796062 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 23 13:17:08.849856 master-0 kubenswrapper[17411]: I0223 13:17:08.849812 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_afeec80f2ec1ff5cb32c2367912befef/startup-monitor/0.log" Feb 23 13:17:08.850028 master-0 
kubenswrapper[17411]: I0223 13:17:08.849907 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:17:08.869446 master-0 kubenswrapper[17411]: I0223 13:17:08.869392 17411 scope.go:117] "RemoveContainer" containerID="a46afb690c12f34d591fbefec336bbc94039270416c52a883ecc6b6372765700" Feb 23 13:17:08.870212 master-0 kubenswrapper[17411]: I0223 13:17:08.869549 17411 scope.go:117] "RemoveContainer" containerID="b4325f84094f6a5f8ce69935fd5dcef125ec5b0e7208b70b7184af2ce6c4e6e7" Feb 23 13:17:08.870711 master-0 kubenswrapper[17411]: I0223 13:17:08.870623 17411 scope.go:117] "RemoveContainer" containerID="67d44d75e83e1738383d940ce092f767380c2ef842af8140e42e9f6428546c93" Feb 23 13:17:08.870711 master-0 kubenswrapper[17411]: I0223 13:17:08.870662 17411 scope.go:117] "RemoveContainer" containerID="1a0344d531e84ba87458cf9e245595bf26beb8556c42c2a98575065196b12964" Feb 23 13:17:08.887103 master-0 kubenswrapper[17411]: I0223 13:17:08.887058 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 23 13:17:08.911654 master-0 kubenswrapper[17411]: I0223 13:17:08.911040 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:17:08.911654 master-0 kubenswrapper[17411]: I0223 13:17:08.911092 17411 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="16158634-5829-4ad1-95d8-d9752701539d" Feb 23 13:17:08.928991 master-0 kubenswrapper[17411]: I0223 13:17:08.928893 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:17:08.929697 master-0 kubenswrapper[17411]: I0223 13:17:08.929654 17411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:08.929772 master-0 kubenswrapper[17411]: I0223 13:17:08.929736 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-service-ca\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:08.929822 master-0 kubenswrapper[17411]: I0223 13:17:08.929796 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfznz\" (UniqueName: \"kubernetes.io/projected/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-kube-api-access-sfznz\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:08.929865 master-0 kubenswrapper[17411]: I0223 13:17:08.929851 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-config\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:08.929916 master-0 kubenswrapper[17411]: I0223 13:17:08.929881 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-oauth-config\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " 
pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:08.929966 master-0 kubenswrapper[17411]: I0223 13:17:08.929914 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-oauth-serving-cert\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:08.941338 master-0 kubenswrapper[17411]: I0223 13:17:08.941286 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 23 13:17:08.941338 master-0 kubenswrapper[17411]: I0223 13:17:08.941332 17411 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="16158634-5829-4ad1-95d8-d9752701539d" Feb 23 13:17:08.970995 master-0 kubenswrapper[17411]: I0223 13:17:08.970947 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_afeec80f2ec1ff5cb32c2367912befef/startup-monitor/0.log" Feb 23 13:17:08.971102 master-0 kubenswrapper[17411]: I0223 13:17:08.971006 17411 generic.go:334] "Generic (PLEG): container finished" podID="afeec80f2ec1ff5cb32c2367912befef" containerID="0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb" exitCode=137 Feb 23 13:17:08.971102 master-0 kubenswrapper[17411]: I0223 13:17:08.971088 17411 scope.go:117] "RemoveContainer" containerID="0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb" Feb 23 13:17:08.971277 master-0 kubenswrapper[17411]: I0223 13:17:08.971197 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 23 13:17:08.975578 master-0 kubenswrapper[17411]: I0223 13:17:08.975551 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-6wk86_ae1799b6-85b0-4aed-8835-35cb3d8d1109/openshift-apiserver-operator/2.log" Feb 23 13:17:08.975678 master-0 kubenswrapper[17411]: I0223 13:17:08.975606 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-6wk86" event={"ID":"ae1799b6-85b0-4aed-8835-35cb3d8d1109","Type":"ContainerStarted","Data":"11c1a4d573cd03f80fbade0d8c761c36e22a4ed2d4e4f1e76325616224753684"} Feb 23 13:17:09.022798 master-0 kubenswrapper[17411]: I0223 13:17:09.022751 17411 scope.go:117] "RemoveContainer" containerID="0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb" Feb 23 13:17:09.023617 master-0 kubenswrapper[17411]: E0223 13:17:09.023513 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb\": container with ID starting with 0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb not found: ID does not exist" containerID="0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb" Feb 23 13:17:09.023688 master-0 kubenswrapper[17411]: I0223 13:17:09.023613 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb"} err="failed to get container status \"0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb\": rpc error: code = NotFound desc = could not find container \"0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb\": container with ID starting with 0b8bf75868c56b3fe4a4cd3e6f70cc025a94d5c152b2636fdbf0e5e715bdf2eb 
not found: ID does not exist" Feb 23 13:17:09.031080 master-0 kubenswrapper[17411]: I0223 13:17:09.031043 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-manifests\") pod \"afeec80f2ec1ff5cb32c2367912befef\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " Feb 23 13:17:09.031153 master-0 kubenswrapper[17411]: I0223 13:17:09.031135 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-log\") pod \"afeec80f2ec1ff5cb32c2367912befef\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " Feb 23 13:17:09.031189 master-0 kubenswrapper[17411]: I0223 13:17:09.031167 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-lock\") pod \"afeec80f2ec1ff5cb32c2367912befef\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " Feb 23 13:17:09.031443 master-0 kubenswrapper[17411]: I0223 13:17:09.031383 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-pod-resource-dir\") pod \"afeec80f2ec1ff5cb32c2367912befef\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " Feb 23 13:17:09.031443 master-0 kubenswrapper[17411]: I0223 13:17:09.031430 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-resource-dir\") pod \"afeec80f2ec1ff5cb32c2367912befef\" (UID: \"afeec80f2ec1ff5cb32c2367912befef\") " Feb 23 13:17:09.032357 master-0 kubenswrapper[17411]: I0223 13:17:09.032306 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.032413 master-0 kubenswrapper[17411]: I0223 13:17:09.032399 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-service-ca\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.032494 master-0 kubenswrapper[17411]: I0223 13:17:09.032476 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfznz\" (UniqueName: \"kubernetes.io/projected/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-kube-api-access-sfznz\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.032611 master-0 kubenswrapper[17411]: I0223 13:17:09.032560 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-config\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.032611 master-0 kubenswrapper[17411]: I0223 13:17:09.032584 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-oauth-config\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.032707 master-0 kubenswrapper[17411]: I0223 13:17:09.032617 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-oauth-serving-cert\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.032991 master-0 kubenswrapper[17411]: I0223 13:17:09.032929 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-manifests" (OuterVolumeSpecName: "manifests") pod "afeec80f2ec1ff5cb32c2367912befef" (UID: "afeec80f2ec1ff5cb32c2367912befef"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:09.033033 master-0 kubenswrapper[17411]: I0223 13:17:09.032999 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-log" (OuterVolumeSpecName: "var-log") pod "afeec80f2ec1ff5cb32c2367912befef" (UID: "afeec80f2ec1ff5cb32c2367912befef"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:09.033033 master-0 kubenswrapper[17411]: I0223 13:17:09.033023 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-lock" (OuterVolumeSpecName: "var-lock") pod "afeec80f2ec1ff5cb32c2367912befef" (UID: "afeec80f2ec1ff5cb32c2367912befef"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:09.034188 master-0 kubenswrapper[17411]: I0223 13:17:09.034164 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "afeec80f2ec1ff5cb32c2367912befef" (UID: "afeec80f2ec1ff5cb32c2367912befef"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:09.034282 master-0 kubenswrapper[17411]: E0223 13:17:09.034269 17411 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 23 13:17:09.034339 master-0 kubenswrapper[17411]: E0223 13:17:09.034321 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert podName:481f3444-6cc7-4ae0-89cd-64fb776b4bf3 nodeName:}" failed. No retries permitted until 2026-02-23 13:17:09.534300288 +0000 UTC m=+622.961806885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert") pod "console-786c4f4c85-kvlm6" (UID: "481f3444-6cc7-4ae0-89cd-64fb776b4bf3") : secret "console-serving-cert" not found Feb 23 13:17:09.036205 master-0 kubenswrapper[17411]: I0223 13:17:09.036167 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-service-ca\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.038632 master-0 kubenswrapper[17411]: I0223 13:17:09.038591 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-config\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.039289 master-0 kubenswrapper[17411]: I0223 13:17:09.039182 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-oauth-serving-cert\") pod 
\"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.040845 master-0 kubenswrapper[17411]: I0223 13:17:09.040783 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "afeec80f2ec1ff5cb32c2367912befef" (UID: "afeec80f2ec1ff5cb32c2367912befef"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:09.042409 master-0 kubenswrapper[17411]: I0223 13:17:09.042371 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-oauth-config\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.134714 master-0 kubenswrapper[17411]: I0223 13:17:09.134654 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfznz\" (UniqueName: \"kubernetes.io/projected/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-kube-api-access-sfznz\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.135072 master-0 kubenswrapper[17411]: I0223 13:17:09.135004 17411 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-manifests\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:09.135114 master-0 kubenswrapper[17411]: I0223 13:17:09.135074 17411 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-log\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:09.135114 master-0 kubenswrapper[17411]: I0223 
13:17:09.135091 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:09.135114 master-0 kubenswrapper[17411]: I0223 13:17:09.135104 17411 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:09.135258 master-0 kubenswrapper[17411]: I0223 13:17:09.135118 17411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afeec80f2ec1ff5cb32c2367912befef-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:09.280874 master-0 kubenswrapper[17411]: I0223 13:17:09.280819 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 13:17:09.541519 master-0 kubenswrapper[17411]: I0223 13:17:09.540903 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:09.541519 master-0 kubenswrapper[17411]: E0223 13:17:09.541185 17411 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 23 13:17:09.541519 master-0 kubenswrapper[17411]: E0223 13:17:09.541269 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert podName:481f3444-6cc7-4ae0-89cd-64fb776b4bf3 nodeName:}" failed. No retries permitted until 2026-02-23 13:17:10.541232296 +0000 UTC m=+623.968738893 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert") pod "console-786c4f4c85-kvlm6" (UID: "481f3444-6cc7-4ae0-89cd-64fb776b4bf3") : secret "console-serving-cert" not found Feb 23 13:17:09.987290 master-0 kubenswrapper[17411]: I0223 13:17:09.985505 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-p5488_c2b80534-3c9d-4ddb-9215-d50d63294c7c/openshift-config-operator/4.log" Feb 23 13:17:09.987290 master-0 kubenswrapper[17411]: I0223 13:17:09.985871 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" event={"ID":"c2b80534-3c9d-4ddb-9215-d50d63294c7c","Type":"ContainerStarted","Data":"c285b235a35dcf877accca5db5c7d4e1182ab579cbf0a5d561a5962a0248b971"} Feb 23 13:17:09.987290 master-0 kubenswrapper[17411]: I0223 13:17:09.986437 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:17:09.994325 master-0 kubenswrapper[17411]: I0223 13:17:09.993840 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-jpf5n_b1970ec8-620e-4529-bf3b-1cf9a52c27d3/kube-controller-manager-operator/2.log" Feb 23 13:17:09.994325 master-0 kubenswrapper[17411]: I0223 13:17:09.993937 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-jpf5n" event={"ID":"b1970ec8-620e-4529-bf3b-1cf9a52c27d3","Type":"ContainerStarted","Data":"cd2d8ea7fbe0efb213b9d8ac913d88c02fa800dd52599cc412e072fc21543c50"} Feb 23 13:17:10.013690 master-0 kubenswrapper[17411]: I0223 13:17:10.013628 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-rvccp_25b5540c-da7d-4b6f-a15f-394451f4674e/service-ca-operator/2.log" Feb 23 13:17:10.013940 master-0 kubenswrapper[17411]: I0223 13:17:10.013759 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-rvccp" event={"ID":"25b5540c-da7d-4b6f-a15f-394451f4674e","Type":"ContainerStarted","Data":"4aef158a1eb0bf11a77b42bc3720050e5859daa6d5c6fc200f93babc7c9ef4a4"} Feb 23 13:17:10.021992 master-0 kubenswrapper[17411]: I0223 13:17:10.021931 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-576b4d78bd-nds57_71a07622-3038-4b8c-b6bb-5f28a4115012/service-ca-controller/1.log" Feb 23 13:17:10.022217 master-0 kubenswrapper[17411]: I0223 13:17:10.022012 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nds57" event={"ID":"71a07622-3038-4b8c-b6bb-5f28a4115012","Type":"ContainerStarted","Data":"aee1bd78a9d96283dbce70f445061107df3ce906ee1b7d0e888f41af22529b43"} Feb 23 13:17:10.043363 master-0 kubenswrapper[17411]: I0223 13:17:10.043305 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg"] Feb 23 13:17:10.044608 master-0 kubenswrapper[17411]: I0223 13:17:10.044585 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.050653 master-0 kubenswrapper[17411]: I0223 13:17:10.050589 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 13:17:10.055331 master-0 kubenswrapper[17411]: I0223 13:17:10.054860 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg"] Feb 23 13:17:10.153891 master-0 kubenswrapper[17411]: I0223 13:17:10.153818 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ab5b849-08ac-4144-814f-78cd765574e3-secret-volume\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.154128 master-0 kubenswrapper[17411]: I0223 13:17:10.153907 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ab5b849-08ac-4144-814f-78cd765574e3-config-volume\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.154128 master-0 kubenswrapper[17411]: I0223 13:17:10.153976 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlrgw\" (UniqueName: \"kubernetes.io/projected/0ab5b849-08ac-4144-814f-78cd765574e3-kube-api-access-dlrgw\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.255615 master-0 kubenswrapper[17411]: I0223 13:17:10.255491 17411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ab5b849-08ac-4144-814f-78cd765574e3-secret-volume\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.255615 master-0 kubenswrapper[17411]: I0223 13:17:10.255571 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ab5b849-08ac-4144-814f-78cd765574e3-config-volume\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.255834 master-0 kubenswrapper[17411]: I0223 13:17:10.255616 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlrgw\" (UniqueName: \"kubernetes.io/projected/0ab5b849-08ac-4144-814f-78cd765574e3-kube-api-access-dlrgw\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.256783 master-0 kubenswrapper[17411]: I0223 13:17:10.256752 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ab5b849-08ac-4144-814f-78cd765574e3-config-volume\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.259042 master-0 kubenswrapper[17411]: I0223 13:17:10.258991 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ab5b849-08ac-4144-814f-78cd765574e3-secret-volume\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.272792 master-0 kubenswrapper[17411]: I0223 13:17:10.272727 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlrgw\" (UniqueName: \"kubernetes.io/projected/0ab5b849-08ac-4144-814f-78cd765574e3-kube-api-access-dlrgw\") pod \"collect-profiles-29530875-4pmtg\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.373481 master-0 kubenswrapper[17411]: I0223 13:17:10.373392 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:10.561399 master-0 kubenswrapper[17411]: I0223 13:17:10.559814 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:10.565658 master-0 kubenswrapper[17411]: I0223 13:17:10.564041 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert\") pod \"console-786c4f4c85-kvlm6\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:10.643902 master-0 kubenswrapper[17411]: I0223 13:17:10.643777 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:10.829720 master-0 kubenswrapper[17411]: I0223 13:17:10.829523 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg"] Feb 23 13:17:10.879135 master-0 kubenswrapper[17411]: I0223 13:17:10.879081 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afeec80f2ec1ff5cb32c2367912befef" path="/var/lib/kubelet/pods/afeec80f2ec1ff5cb32c2367912befef/volumes" Feb 23 13:17:11.045093 master-0 kubenswrapper[17411]: I0223 13:17:11.044900 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" event={"ID":"0ab5b849-08ac-4144-814f-78cd765574e3","Type":"ContainerStarted","Data":"1a34cd55934a645dc894ee7263783bb77b9f9fb509214f70844ef6cd561913f5"} Feb 23 13:17:11.045093 master-0 kubenswrapper[17411]: I0223 13:17:11.045005 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" event={"ID":"0ab5b849-08ac-4144-814f-78cd765574e3","Type":"ContainerStarted","Data":"1baa0ab4b4d152d9b2536e79d035be16565a8b537b9a002a9b0824730751ea9c"} Feb 23 13:17:11.072873 master-0 kubenswrapper[17411]: I0223 13:17:11.070982 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" podStartSLOduration=1.070963292 podStartE2EDuration="1.070963292s" podCreationTimestamp="2026-02-23 13:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:17:11.069633094 +0000 UTC m=+624.497139731" watchObservedRunningTime="2026-02-23 13:17:11.070963292 +0000 UTC m=+624.498469889" Feb 23 13:17:11.104322 master-0 kubenswrapper[17411]: I0223 13:17:11.104239 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-786c4f4c85-kvlm6"] Feb 23 13:17:12.054583 master-0 kubenswrapper[17411]: I0223 13:17:12.054505 17411 generic.go:334] "Generic (PLEG): container finished" podID="0ab5b849-08ac-4144-814f-78cd765574e3" containerID="1a34cd55934a645dc894ee7263783bb77b9f9fb509214f70844ef6cd561913f5" exitCode=0 Feb 23 13:17:12.055305 master-0 kubenswrapper[17411]: I0223 13:17:12.054592 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" event={"ID":"0ab5b849-08ac-4144-814f-78cd765574e3","Type":"ContainerDied","Data":"1a34cd55934a645dc894ee7263783bb77b9f9fb509214f70844ef6cd561913f5"} Feb 23 13:17:12.056996 master-0 kubenswrapper[17411]: I0223 13:17:12.056962 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-786c4f4c85-kvlm6" event={"ID":"481f3444-6cc7-4ae0-89cd-64fb776b4bf3","Type":"ContainerStarted","Data":"ca5a21740beffc32a56e62658152d32adb3fd0ae26afed4a705b227dcfbd1d31"} Feb 23 13:17:13.870620 master-0 kubenswrapper[17411]: I0223 13:17:13.869931 17411 scope.go:117] "RemoveContainer" containerID="1152d28f4c1f4afcb3b6fce62c91926a60ad42ad6accdc15babf7a5ac6cf43c3" Feb 23 13:17:13.870620 master-0 kubenswrapper[17411]: I0223 13:17:13.870027 17411 scope.go:117] "RemoveContainer" containerID="2c1de830984a0507238799826eac1f7e8b3e85789c4103320e7f2ff4a2d7b339" Feb 23 13:17:13.924281 master-0 kubenswrapper[17411]: I0223 13:17:13.923348 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-p5488" Feb 23 13:17:14.078505 master-0 kubenswrapper[17411]: I0223 13:17:14.078463 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-6llwl_a3dfb271-a659-45e0-b51d-5e99ec43b555/cluster-node-tuning-operator/1.log" Feb 23 13:17:14.079270 master-0 kubenswrapper[17411]: I0223 13:17:14.079202 
17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-6llwl_a3dfb271-a659-45e0-b51d-5e99ec43b555/cluster-node-tuning-operator/0.log" Feb 23 13:17:14.079356 master-0 kubenswrapper[17411]: I0223 13:17:14.079320 17411 generic.go:334] "Generic (PLEG): container finished" podID="a3dfb271-a659-45e0-b51d-5e99ec43b555" containerID="edc1773c982d6063298896af34c17dae7d495b67e0652db28d6d5baf5d894ae5" exitCode=1 Feb 23 13:17:14.079422 master-0 kubenswrapper[17411]: I0223 13:17:14.079375 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" event={"ID":"a3dfb271-a659-45e0-b51d-5e99ec43b555","Type":"ContainerDied","Data":"edc1773c982d6063298896af34c17dae7d495b67e0652db28d6d5baf5d894ae5"} Feb 23 13:17:14.079466 master-0 kubenswrapper[17411]: I0223 13:17:14.079435 17411 scope.go:117] "RemoveContainer" containerID="351e4db24f64009fc4f824529f2660bb02ed2356f12336ec3301a4d672483590" Feb 23 13:17:14.080044 master-0 kubenswrapper[17411]: I0223 13:17:14.080021 17411 scope.go:117] "RemoveContainer" containerID="edc1773c982d6063298896af34c17dae7d495b67e0652db28d6d5baf5d894ae5" Feb 23 13:17:14.080368 master-0 kubenswrapper[17411]: E0223 13:17:14.080339 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-node-tuning-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-node-tuning-operator pod=cluster-node-tuning-operator-bcf775fc9-6llwl_openshift-cluster-node-tuning-operator(a3dfb271-a659-45e0-b51d-5e99ec43b555)\"" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" podUID="a3dfb271-a659-45e0-b51d-5e99ec43b555" Feb 23 13:17:15.097660 master-0 kubenswrapper[17411]: I0223 13:17:15.094619 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" event={"ID":"0ab5b849-08ac-4144-814f-78cd765574e3","Type":"ContainerDied","Data":"1baa0ab4b4d152d9b2536e79d035be16565a8b537b9a002a9b0824730751ea9c"} Feb 23 13:17:15.097660 master-0 kubenswrapper[17411]: I0223 13:17:15.094676 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1baa0ab4b4d152d9b2536e79d035be16565a8b537b9a002a9b0824730751ea9c" Feb 23 13:17:15.156274 master-0 kubenswrapper[17411]: I0223 13:17:15.153152 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:15.242799 master-0 kubenswrapper[17411]: I0223 13:17:15.242736 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlrgw\" (UniqueName: \"kubernetes.io/projected/0ab5b849-08ac-4144-814f-78cd765574e3-kube-api-access-dlrgw\") pod \"0ab5b849-08ac-4144-814f-78cd765574e3\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " Feb 23 13:17:15.243010 master-0 kubenswrapper[17411]: I0223 13:17:15.242903 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ab5b849-08ac-4144-814f-78cd765574e3-secret-volume\") pod \"0ab5b849-08ac-4144-814f-78cd765574e3\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " Feb 23 13:17:15.243010 master-0 kubenswrapper[17411]: I0223 13:17:15.242994 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ab5b849-08ac-4144-814f-78cd765574e3-config-volume\") pod \"0ab5b849-08ac-4144-814f-78cd765574e3\" (UID: \"0ab5b849-08ac-4144-814f-78cd765574e3\") " Feb 23 13:17:15.247276 master-0 kubenswrapper[17411]: I0223 13:17:15.243698 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0ab5b849-08ac-4144-814f-78cd765574e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ab5b849-08ac-4144-814f-78cd765574e3" (UID: "0ab5b849-08ac-4144-814f-78cd765574e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:15.247276 master-0 kubenswrapper[17411]: I0223 13:17:15.246505 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab5b849-08ac-4144-814f-78cd765574e3-kube-api-access-dlrgw" (OuterVolumeSpecName: "kube-api-access-dlrgw") pod "0ab5b849-08ac-4144-814f-78cd765574e3" (UID: "0ab5b849-08ac-4144-814f-78cd765574e3"). InnerVolumeSpecName "kube-api-access-dlrgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:17:15.250279 master-0 kubenswrapper[17411]: I0223 13:17:15.249910 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab5b849-08ac-4144-814f-78cd765574e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0ab5b849-08ac-4144-814f-78cd765574e3" (UID: "0ab5b849-08ac-4144-814f-78cd765574e3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:17:15.350389 master-0 kubenswrapper[17411]: I0223 13:17:15.344222 17411 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ab5b849-08ac-4144-814f-78cd765574e3-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:15.350389 master-0 kubenswrapper[17411]: I0223 13:17:15.344289 17411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ab5b849-08ac-4144-814f-78cd765574e3-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:15.350389 master-0 kubenswrapper[17411]: I0223 13:17:15.344301 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlrgw\" (UniqueName: \"kubernetes.io/projected/0ab5b849-08ac-4144-814f-78cd765574e3-kube-api-access-dlrgw\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:16.111905 master-0 kubenswrapper[17411]: I0223 13:17:16.111784 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-ld4gj_f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8/authentication-operator/3.log" Feb 23 13:17:16.112405 master-0 kubenswrapper[17411]: I0223 13:17:16.111929 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-ld4gj" event={"ID":"f5400b52-ab9c-4bc3-8ea5-80fb3e1b37f8","Type":"ContainerStarted","Data":"6d1df49ba34700ae002bca8c747af8f48b5ef1a7f70f81da31aa208e63838cf1"} Feb 23 13:17:16.118505 master-0 kubenswrapper[17411]: I0223 13:17:16.115394 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-78784b9d57-r4sf8_dc1620b0-3903-418b-9dd2-1f99bc5a0ae8/route-controller-manager/1.log" Feb 23 13:17:16.118505 master-0 kubenswrapper[17411]: I0223 13:17:16.115475 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" event={"ID":"dc1620b0-3903-418b-9dd2-1f99bc5a0ae8","Type":"ContainerStarted","Data":"04bf9dba6c3ae7ac67d2505c2730139e4eb6e3cd186670ddd3b6d3c1972ea1b5"} Feb 23 13:17:16.118505 master-0 kubenswrapper[17411]: I0223 13:17:16.116404 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:17:16.119473 master-0 kubenswrapper[17411]: I0223 13:17:16.119234 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-6llwl_a3dfb271-a659-45e0-b51d-5e99ec43b555/cluster-node-tuning-operator/1.log" Feb 23 13:17:16.120673 master-0 kubenswrapper[17411]: I0223 13:17:16.120647 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-78784b9d57-r4sf8" Feb 23 13:17:16.124661 master-0 kubenswrapper[17411]: I0223 13:17:16.124605 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-786c4f4c85-kvlm6_481f3444-6cc7-4ae0-89cd-64fb776b4bf3/console/0.log" Feb 23 13:17:16.124661 master-0 kubenswrapper[17411]: I0223 13:17:16.124647 17411 generic.go:334] "Generic (PLEG): container finished" podID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerID="65e0de263e78124444892833f2525946dbed9f25e7ce79c55fa4beeeeb5154ec" exitCode=255 Feb 23 13:17:16.124773 master-0 kubenswrapper[17411]: I0223 13:17:16.124723 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-4pmtg" Feb 23 13:17:16.125742 master-0 kubenswrapper[17411]: I0223 13:17:16.125358 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-786c4f4c85-kvlm6" event={"ID":"481f3444-6cc7-4ae0-89cd-64fb776b4bf3","Type":"ContainerDied","Data":"65e0de263e78124444892833f2525946dbed9f25e7ce79c55fa4beeeeb5154ec"} Feb 23 13:17:16.125954 master-0 kubenswrapper[17411]: I0223 13:17:16.125932 17411 scope.go:117] "RemoveContainer" containerID="65e0de263e78124444892833f2525946dbed9f25e7ce79c55fa4beeeeb5154ec" Feb 23 13:17:17.136656 master-0 kubenswrapper[17411]: I0223 13:17:17.136112 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-786c4f4c85-kvlm6_481f3444-6cc7-4ae0-89cd-64fb776b4bf3/console/0.log" Feb 23 13:17:17.136656 master-0 kubenswrapper[17411]: I0223 13:17:17.136315 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-786c4f4c85-kvlm6" event={"ID":"481f3444-6cc7-4ae0-89cd-64fb776b4bf3","Type":"ContainerStarted","Data":"5c49e6f1c5c040ede1977b802340fdbe4433a88936e4e64ed0ee86f8be3897c4"} Feb 23 13:17:17.185496 master-0 kubenswrapper[17411]: I0223 13:17:17.185284 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-786c4f4c85-kvlm6" podStartSLOduration=5.08560339 podStartE2EDuration="9.185194742s" podCreationTimestamp="2026-02-23 13:17:08 +0000 UTC" firstStartedPulling="2026-02-23 13:17:11.105477791 +0000 UTC m=+624.532984408" lastFinishedPulling="2026-02-23 13:17:15.205069163 +0000 UTC m=+628.632575760" observedRunningTime="2026-02-23 13:17:17.170998819 +0000 UTC m=+630.598505446" watchObservedRunningTime="2026-02-23 13:17:17.185194742 +0000 UTC m=+630.612701349" Feb 23 13:17:17.232353 master-0 kubenswrapper[17411]: I0223 13:17:17.232296 17411 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 23 13:17:17.232820 master-0 kubenswrapper[17411]: E0223 13:17:17.232691 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab5b849-08ac-4144-814f-78cd765574e3" containerName="collect-profiles" Feb 23 13:17:17.232820 master-0 kubenswrapper[17411]: I0223 13:17:17.232707 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab5b849-08ac-4144-814f-78cd765574e3" containerName="collect-profiles" Feb 23 13:17:17.233154 master-0 kubenswrapper[17411]: I0223 13:17:17.233135 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab5b849-08ac-4144-814f-78cd765574e3" containerName="collect-profiles" Feb 23 13:17:17.233821 master-0 kubenswrapper[17411]: I0223 13:17:17.233800 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.238822 master-0 kubenswrapper[17411]: I0223 13:17:17.238786 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-t58wm" Feb 23 13:17:17.239027 master-0 kubenswrapper[17411]: I0223 13:17:17.239010 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 13:17:17.249455 master-0 kubenswrapper[17411]: I0223 13:17:17.249346 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 23 13:17:17.379786 master-0 kubenswrapper[17411]: I0223 13:17:17.379711 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.380031 master-0 
kubenswrapper[17411]: I0223 13:17:17.379981 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2fae305-15e6-407f-b4da-ee80c73ac312-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.380149 master-0 kubenswrapper[17411]: I0223 13:17:17.380118 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-var-lock\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.481806 master-0 kubenswrapper[17411]: I0223 13:17:17.481746 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2fae305-15e6-407f-b4da-ee80c73ac312-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.482048 master-0 kubenswrapper[17411]: I0223 13:17:17.481928 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-var-lock\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.482048 master-0 kubenswrapper[17411]: I0223 13:17:17.482001 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " 
pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.482176 master-0 kubenswrapper[17411]: I0223 13:17:17.482132 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-var-lock\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.482323 master-0 kubenswrapper[17411]: I0223 13:17:17.482274 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.497060 master-0 kubenswrapper[17411]: I0223 13:17:17.497022 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2fae305-15e6-407f-b4da-ee80c73ac312-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.561208 master-0 kubenswrapper[17411]: I0223 13:17:17.561147 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:17.985636 master-0 kubenswrapper[17411]: I0223 13:17:17.985544 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 23 13:17:17.986837 master-0 kubenswrapper[17411]: W0223 13:17:17.986785 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf2fae305_15e6_407f_b4da_ee80c73ac312.slice/crio-485f393b6a5609b601453927170db63a9f792cb1674b891a04c73541fb6dc0b7 WatchSource:0}: Error finding container 485f393b6a5609b601453927170db63a9f792cb1674b891a04c73541fb6dc0b7: Status 404 returned error can't find the container with id 485f393b6a5609b601453927170db63a9f792cb1674b891a04c73541fb6dc0b7 Feb 23 13:17:18.145396 master-0 kubenswrapper[17411]: I0223 13:17:18.145333 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"f2fae305-15e6-407f-b4da-ee80c73ac312","Type":"ContainerStarted","Data":"485f393b6a5609b601453927170db63a9f792cb1674b891a04c73541fb6dc0b7"} Feb 23 13:17:18.869612 master-0 kubenswrapper[17411]: I0223 13:17:18.869554 17411 scope.go:117] "RemoveContainer" containerID="72600f7ac1b92f01197c56d298715777572c9e118234eed615d6c2923db72d7a" Feb 23 13:17:18.870027 master-0 kubenswrapper[17411]: E0223 13:17:18.869849 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-8mxs2_openshift-machine-api(16898873-740b-4b85-99cf-d25a28d4ab00)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" podUID="16898873-740b-4b85-99cf-d25a28d4ab00" Feb 23 13:17:19.155342 master-0 kubenswrapper[17411]: I0223 13:17:19.155171 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"f2fae305-15e6-407f-b4da-ee80c73ac312","Type":"ContainerStarted","Data":"13ab19f676af14275b79109cb76031fa7d7a3f803bdc414c94048fd8521e0f31"} Feb 23 13:17:19.177177 master-0 kubenswrapper[17411]: I0223 13:17:19.177062 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=2.177037433 podStartE2EDuration="2.177037433s" podCreationTimestamp="2026-02-23 13:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:17:19.173950326 +0000 UTC m=+632.601456963" watchObservedRunningTime="2026-02-23 13:17:19.177037433 +0000 UTC m=+632.604544030" Feb 23 13:17:19.457187 master-0 kubenswrapper[17411]: I0223 13:17:19.457090 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55fc6cb76d-9jsfs"] Feb 23 13:17:19.458782 master-0 kubenswrapper[17411]: I0223 13:17:19.458734 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.479229 master-0 kubenswrapper[17411]: I0223 13:17:19.479188 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55fc6cb76d-9jsfs"] Feb 23 13:17:19.630818 master-0 kubenswrapper[17411]: I0223 13:17:19.630682 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-config\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.631060 master-0 kubenswrapper[17411]: I0223 13:17:19.630920 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-service-ca\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.631223 master-0 kubenswrapper[17411]: I0223 13:17:19.631089 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-serving-cert\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.631931 master-0 kubenswrapper[17411]: I0223 13:17:19.631869 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pk25\" (UniqueName: \"kubernetes.io/projected/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-kube-api-access-2pk25\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.632043 master-0 
kubenswrapper[17411]: I0223 13:17:19.631991 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-oauth-serving-cert\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.632724 master-0 kubenswrapper[17411]: I0223 13:17:19.632674 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-oauth-config\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.735585 master-0 kubenswrapper[17411]: I0223 13:17:19.735466 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-oauth-serving-cert\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.735770 master-0 kubenswrapper[17411]: I0223 13:17:19.735601 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-oauth-config\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.735884 master-0 kubenswrapper[17411]: I0223 13:17:19.735864 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-config\") pod \"console-55fc6cb76d-9jsfs\" (UID: 
\"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.736047 master-0 kubenswrapper[17411]: I0223 13:17:19.736025 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-service-ca\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.736146 master-0 kubenswrapper[17411]: I0223 13:17:19.736131 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-serving-cert\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.736300 master-0 kubenswrapper[17411]: I0223 13:17:19.736191 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pk25\" (UniqueName: \"kubernetes.io/projected/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-kube-api-access-2pk25\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.736870 master-0 kubenswrapper[17411]: I0223 13:17:19.736803 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-oauth-serving-cert\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.737017 master-0 kubenswrapper[17411]: I0223 13:17:19.736874 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-config\") pod 
\"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.737461 master-0 kubenswrapper[17411]: I0223 13:17:19.737430 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-service-ca\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.741429 master-0 kubenswrapper[17411]: I0223 13:17:19.741344 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-oauth-config\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.742030 master-0 kubenswrapper[17411]: I0223 13:17:19.742006 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-serving-cert\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.766878 master-0 kubenswrapper[17411]: I0223 13:17:19.766808 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pk25\" (UniqueName: \"kubernetes.io/projected/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-kube-api-access-2pk25\") pod \"console-55fc6cb76d-9jsfs\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") " pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:19.783073 master-0 kubenswrapper[17411]: I0223 13:17:19.782958 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:20.303285 master-0 kubenswrapper[17411]: I0223 13:17:20.303128 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55fc6cb76d-9jsfs"] Feb 23 13:17:20.310207 master-0 kubenswrapper[17411]: W0223 13:17:20.310153 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1e79bb_bc6b_4cd8_9988_0adf5b658b80.slice/crio-ec04ba18bb3cf99facf201115b7affcd132dcb1c2d2593882ffbbfb3700d60ce WatchSource:0}: Error finding container ec04ba18bb3cf99facf201115b7affcd132dcb1c2d2593882ffbbfb3700d60ce: Status 404 returned error can't find the container with id ec04ba18bb3cf99facf201115b7affcd132dcb1c2d2593882ffbbfb3700d60ce Feb 23 13:17:20.645144 master-0 kubenswrapper[17411]: I0223 13:17:20.645013 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:20.645144 master-0 kubenswrapper[17411]: I0223 13:17:20.645082 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:20.651481 master-0 kubenswrapper[17411]: I0223 13:17:20.651416 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:20.869438 master-0 kubenswrapper[17411]: I0223 13:17:20.869348 17411 scope.go:117] "RemoveContainer" containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8" Feb 23 13:17:20.869689 master-0 kubenswrapper[17411]: E0223 13:17:20.869624 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:17:21.181664 master-0 kubenswrapper[17411]: I0223 13:17:21.181603 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55fc6cb76d-9jsfs" event={"ID":"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80","Type":"ContainerStarted","Data":"b76c9abf714dbf7f3c22da2e43433195586724aa73047a6fbf53b302a613afdd"} Feb 23 13:17:21.181882 master-0 kubenswrapper[17411]: I0223 13:17:21.181677 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55fc6cb76d-9jsfs" event={"ID":"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80","Type":"ContainerStarted","Data":"ec04ba18bb3cf99facf201115b7affcd132dcb1c2d2593882ffbbfb3700d60ce"} Feb 23 13:17:21.188340 master-0 kubenswrapper[17411]: I0223 13:17:21.188294 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:21.211861 master-0 kubenswrapper[17411]: I0223 13:17:21.211752 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55fc6cb76d-9jsfs" podStartSLOduration=2.21172191 podStartE2EDuration="2.21172191s" podCreationTimestamp="2026-02-23 13:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:17:21.203693062 +0000 UTC m=+634.631199709" watchObservedRunningTime="2026-02-23 13:17:21.21172191 +0000 UTC m=+634.639228537" Feb 23 13:17:27.868676 master-0 kubenswrapper[17411]: I0223 13:17:27.868610 17411 scope.go:117] "RemoveContainer" containerID="edc1773c982d6063298896af34c17dae7d495b67e0652db28d6d5baf5d894ae5" Feb 23 13:17:29.783888 master-0 kubenswrapper[17411]: I0223 13:17:29.783708 17411 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:29.783888 master-0 kubenswrapper[17411]: I0223 13:17:29.783779 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:29.788801 master-0 kubenswrapper[17411]: I0223 13:17:29.788749 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:30.270467 master-0 kubenswrapper[17411]: I0223 13:17:30.270389 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55fc6cb76d-9jsfs" Feb 23 13:17:30.342951 master-0 kubenswrapper[17411]: I0223 13:17:30.342905 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-786c4f4c85-kvlm6"] Feb 23 13:17:31.869641 master-0 kubenswrapper[17411]: I0223 13:17:31.869579 17411 scope.go:117] "RemoveContainer" containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8" Feb 23 13:17:31.870234 master-0 kubenswrapper[17411]: E0223 13:17:31.869978 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:17:32.868885 master-0 kubenswrapper[17411]: I0223 13:17:32.868640 17411 scope.go:117] "RemoveContainer" containerID="72600f7ac1b92f01197c56d298715777572c9e118234eed615d6c2923db72d7a" Feb 23 13:17:39.348656 master-0 kubenswrapper[17411]: I0223 13:17:39.348541 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-6llwl_a3dfb271-a659-45e0-b51d-5e99ec43b555/cluster-node-tuning-operator/1.log" Feb 23 13:17:39.349255 master-0 kubenswrapper[17411]: I0223 13:17:39.348785 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-6llwl" event={"ID":"a3dfb271-a659-45e0-b51d-5e99ec43b555","Type":"ContainerStarted","Data":"54f45b4c184db650b80d08d79762602069e81df3b06bdf141a3f9a9ba83bbc0a"} Feb 23 13:17:39.356702 master-0 kubenswrapper[17411]: I0223 13:17:39.354511 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-8mxs2_16898873-740b-4b85-99cf-d25a28d4ab00/cluster-baremetal-operator/5.log" Feb 23 13:17:39.356702 master-0 kubenswrapper[17411]: I0223 13:17:39.355000 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-8mxs2" event={"ID":"16898873-740b-4b85-99cf-d25a28d4ab00","Type":"ContainerStarted","Data":"7462ed5cbf0c55f733ea1c55ca50096fd024756777367440ad71152a9b0e1cd0"} Feb 23 13:17:39.368688 master-0 kubenswrapper[17411]: I0223 13:17:39.359194 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-pdh8w" event={"ID":"736dee32-e1e3-4ba4-b0c5-cf54b2af94b1","Type":"ContainerStarted","Data":"8de3d11e598f90eb34e200960924cd4e0a9c255d017015dee69a2d04dba0f04b"} Feb 23 13:17:39.368688 master-0 kubenswrapper[17411]: I0223 13:17:39.360170 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-955b69498-pdh8w" Feb 23 13:17:39.368688 master-0 kubenswrapper[17411]: I0223 13:17:39.360855 17411 patch_prober.go:28] interesting pod/downloads-955b69498-pdh8w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.98:8080/\": dial tcp 
10.128.0.98:8080: connect: connection refused" start-of-body= Feb 23 13:17:39.368688 master-0 kubenswrapper[17411]: I0223 13:17:39.360899 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-pdh8w" podUID="736dee32-e1e3-4ba4-b0c5-cf54b2af94b1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" Feb 23 13:17:39.429028 master-0 kubenswrapper[17411]: I0223 13:17:39.418854 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-955b69498-pdh8w" podStartSLOduration=2.868781706 podStartE2EDuration="37.418824562s" podCreationTimestamp="2026-02-23 13:17:02 +0000 UTC" firstStartedPulling="2026-02-23 13:17:04.355421506 +0000 UTC m=+617.782928103" lastFinishedPulling="2026-02-23 13:17:38.905464322 +0000 UTC m=+652.332970959" observedRunningTime="2026-02-23 13:17:39.407100139 +0000 UTC m=+652.834606736" watchObservedRunningTime="2026-02-23 13:17:39.418824562 +0000 UTC m=+652.846331169" Feb 23 13:17:40.367842 master-0 kubenswrapper[17411]: I0223 13:17:40.367746 17411 patch_prober.go:28] interesting pod/downloads-955b69498-pdh8w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" start-of-body= Feb 23 13:17:40.368507 master-0 kubenswrapper[17411]: I0223 13:17:40.367847 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-pdh8w" podUID="736dee32-e1e3-4ba4-b0c5-cf54b2af94b1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" Feb 23 13:17:41.375161 master-0 kubenswrapper[17411]: I0223 13:17:41.375090 17411 patch_prober.go:28] interesting pod/downloads-955b69498-pdh8w container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" start-of-body= Feb 23 13:17:41.375859 master-0 kubenswrapper[17411]: I0223 13:17:41.375183 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-pdh8w" podUID="736dee32-e1e3-4ba4-b0c5-cf54b2af94b1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" Feb 23 13:17:43.690161 master-0 kubenswrapper[17411]: I0223 13:17:43.690084 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-955b69498-pdh8w" Feb 23 13:17:44.876469 master-0 kubenswrapper[17411]: I0223 13:17:44.869476 17411 scope.go:117] "RemoveContainer" containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8" Feb 23 13:17:44.876469 master-0 kubenswrapper[17411]: E0223 13:17:44.870001 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:17:51.129841 master-0 kubenswrapper[17411]: I0223 13:17:51.129745 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:17:51.131116 master-0 kubenswrapper[17411]: I0223 13:17:51.130104 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" 
containerName="kube-controller-manager-cert-syncer" containerID="cri-o://ea9aa7893884286b0f9dd2cc94d3dc00f41c3846f07eae1cc605631dd0fe37bc" gracePeriod=30 Feb 23 13:17:51.131116 master-0 kubenswrapper[17411]: I0223 13:17:51.130166 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" containerID="cri-o://42cdeb8b7eb8c28b7cf71798320b73487eab2a374dc84ef2d6218c3ff6c02e03" gracePeriod=30 Feb 23 13:17:51.131116 master-0 kubenswrapper[17411]: I0223 13:17:51.130270 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager" containerID="cri-o://6038b47b20500295b07b50ea89a301874d951b7f4b3a978dab3e4e44820c0ac7" gracePeriod=30 Feb 23 13:17:51.131116 master-0 kubenswrapper[17411]: I0223 13:17:51.130222 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://1b5f99f63dd002feaf41abedc78477cbb67500c7fee6071e3fdb7a32dbad49a8" gracePeriod=30 Feb 23 13:17:51.217610 master-0 kubenswrapper[17411]: I0223 13:17:51.217486 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:17:51.217975 master-0 kubenswrapper[17411]: E0223 13:17:51.217950 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.217975 master-0 kubenswrapper[17411]: I0223 13:17:51.217975 17411 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: E0223 13:17:51.217995 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-recovery-controller" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: I0223 13:17:51.218009 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-recovery-controller" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: E0223 13:17:51.218044 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: I0223 13:17:51.218055 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: E0223 13:17:51.218085 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: I0223 13:17:51.218097 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: E0223 13:17:51.218125 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: I0223 13:17:51.218137 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218146 master-0 kubenswrapper[17411]: E0223 13:17:51.218157 17411 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-cert-syncer" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218171 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-cert-syncer" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: E0223 13:17:51.218190 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218202 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: E0223 13:17:51.218226 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218237 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: E0223 13:17:51.218283 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218297 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218546 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218567 17411 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-recovery-controller" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218591 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218613 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-cert-syncer" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218647 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218674 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.218719 master-0 kubenswrapper[17411]: I0223 13:17:51.218690 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.219689 master-0 kubenswrapper[17411]: E0223 13:17:51.218915 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-recovery-controller" Feb 23 13:17:51.219689 master-0 kubenswrapper[17411]: I0223 13:17:51.218933 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-recovery-controller" Feb 23 13:17:51.219689 master-0 kubenswrapper[17411]: I0223 13:17:51.219171 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager-recovery-controller" Feb 23 13:17:51.219689 master-0 kubenswrapper[17411]: I0223 
13:17:51.219221 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="kube-controller-manager" Feb 23 13:17:51.219689 master-0 kubenswrapper[17411]: I0223 13:17:51.219279 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b7ce474df02ea287eb02ea513a627a" containerName="cluster-policy-controller" Feb 23 13:17:51.321934 master-0 kubenswrapper[17411]: I0223 13:17:51.321845 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/301dfa6ea9397ba2f06b3a202daef281-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"301dfa6ea9397ba2f06b3a202daef281\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:51.322614 master-0 kubenswrapper[17411]: I0223 13:17:51.322192 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/301dfa6ea9397ba2f06b3a202daef281-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"301dfa6ea9397ba2f06b3a202daef281\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:51.424533 master-0 kubenswrapper[17411]: I0223 13:17:51.424447 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/301dfa6ea9397ba2f06b3a202daef281-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"301dfa6ea9397ba2f06b3a202daef281\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:51.424786 master-0 kubenswrapper[17411]: I0223 13:17:51.424616 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/301dfa6ea9397ba2f06b3a202daef281-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"301dfa6ea9397ba2f06b3a202daef281\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:51.424786 master-0 kubenswrapper[17411]: I0223 13:17:51.424660 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/301dfa6ea9397ba2f06b3a202daef281-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"301dfa6ea9397ba2f06b3a202daef281\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:51.424786 master-0 kubenswrapper[17411]: I0223 13:17:51.424608 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/301dfa6ea9397ba2f06b3a202daef281-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"301dfa6ea9397ba2f06b3a202daef281\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:51.501769 master-0 kubenswrapper[17411]: I0223 13:17:51.501573 17411 generic.go:334] "Generic (PLEG): container finished" podID="f2fae305-15e6-407f-b4da-ee80c73ac312" containerID="13ab19f676af14275b79109cb76031fa7d7a3f803bdc414c94048fd8521e0f31" exitCode=0 Feb 23 13:17:51.501769 master-0 kubenswrapper[17411]: I0223 13:17:51.501664 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"f2fae305-15e6-407f-b4da-ee80c73ac312","Type":"ContainerDied","Data":"13ab19f676af14275b79109cb76031fa7d7a3f803bdc414c94048fd8521e0f31"} Feb 23 13:17:51.505810 master-0 kubenswrapper[17411]: I0223 13:17:51.505763 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/cluster-policy-controller/3.log" Feb 23 13:17:51.507564 master-0 kubenswrapper[17411]: I0223 13:17:51.507488 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="38b7ce474df02ea287eb02ea513a627a" podUID="301dfa6ea9397ba2f06b3a202daef281" Feb 23 13:17:51.508281 master-0 kubenswrapper[17411]: I0223 13:17:51.508194 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager-cert-syncer/0.log" Feb 23 13:17:51.510023 master-0 kubenswrapper[17411]: I0223 13:17:51.509982 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager/0.log" Feb 23 13:17:51.510128 master-0 kubenswrapper[17411]: I0223 13:17:51.510077 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="1b5f99f63dd002feaf41abedc78477cbb67500c7fee6071e3fdb7a32dbad49a8" exitCode=0 Feb 23 13:17:51.510128 master-0 kubenswrapper[17411]: I0223 13:17:51.510112 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="42cdeb8b7eb8c28b7cf71798320b73487eab2a374dc84ef2d6218c3ff6c02e03" exitCode=0 Feb 23 13:17:51.510271 master-0 kubenswrapper[17411]: I0223 13:17:51.510128 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="6038b47b20500295b07b50ea89a301874d951b7f4b3a978dab3e4e44820c0ac7" exitCode=0 Feb 23 13:17:51.510271 master-0 kubenswrapper[17411]: I0223 13:17:51.510146 17411 generic.go:334] "Generic (PLEG): container finished" podID="38b7ce474df02ea287eb02ea513a627a" containerID="ea9aa7893884286b0f9dd2cc94d3dc00f41c3846f07eae1cc605631dd0fe37bc" exitCode=2 Feb 23 13:17:51.510394 master-0 kubenswrapper[17411]: I0223 13:17:51.510282 17411 scope.go:117] "RemoveContainer" containerID="7be9444f5b625e402453341f193b326bd7008df65bbec6d9b42b674fec217d14" Feb 23 13:17:51.572051 
master-0 kubenswrapper[17411]: I0223 13:17:51.571988 17411 scope.go:117] "RemoveContainer" containerID="e4663029bff942030b264b346e82302527310fa787735f4248a285d5679c54dc" Feb 23 13:17:51.593309 master-0 kubenswrapper[17411]: I0223 13:17:51.593231 17411 scope.go:117] "RemoveContainer" containerID="a6bd5c98100900ff484d9ecc07c3575ef2dfde242a0ba0ee9c6ef45ff1a27bdb" Feb 23 13:17:51.812418 master-0 kubenswrapper[17411]: I0223 13:17:51.812240 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager-cert-syncer/0.log" Feb 23 13:17:51.812729 master-0 kubenswrapper[17411]: I0223 13:17:51.812456 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:51.817363 master-0 kubenswrapper[17411]: I0223 13:17:51.817308 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="38b7ce474df02ea287eb02ea513a627a" podUID="301dfa6ea9397ba2f06b3a202daef281" Feb 23 13:17:51.933946 master-0 kubenswrapper[17411]: I0223 13:17:51.933745 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-cert-dir\") pod \"38b7ce474df02ea287eb02ea513a627a\" (UID: \"38b7ce474df02ea287eb02ea513a627a\") " Feb 23 13:17:51.933946 master-0 kubenswrapper[17411]: I0223 13:17:51.933879 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-resource-dir\") pod \"38b7ce474df02ea287eb02ea513a627a\" (UID: \"38b7ce474df02ea287eb02ea513a627a\") " Feb 23 13:17:51.934322 master-0 kubenswrapper[17411]: I0223 13:17:51.933949 17411 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "38b7ce474df02ea287eb02ea513a627a" (UID: "38b7ce474df02ea287eb02ea513a627a"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:51.934322 master-0 kubenswrapper[17411]: I0223 13:17:51.934029 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "38b7ce474df02ea287eb02ea513a627a" (UID: "38b7ce474df02ea287eb02ea513a627a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:51.934717 master-0 kubenswrapper[17411]: I0223 13:17:51.934659 17411 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:51.934788 master-0 kubenswrapper[17411]: I0223 13:17:51.934719 17411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/38b7ce474df02ea287eb02ea513a627a-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:52.525856 master-0 kubenswrapper[17411]: I0223 13:17:52.525794 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_38b7ce474df02ea287eb02ea513a627a/kube-controller-manager-cert-syncer/0.log" Feb 23 13:17:52.526824 master-0 kubenswrapper[17411]: I0223 13:17:52.525987 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93ac8d380846375eb3a978d2f0a3e4d03963a17496bbb3d9d032fb2bdb89ef50" Feb 23 13:17:52.526824 master-0 kubenswrapper[17411]: I0223 13:17:52.526017 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:17:52.532401 master-0 kubenswrapper[17411]: I0223 13:17:52.532307 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="38b7ce474df02ea287eb02ea513a627a" podUID="301dfa6ea9397ba2f06b3a202daef281" Feb 23 13:17:52.572019 master-0 kubenswrapper[17411]: I0223 13:17:52.571951 17411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="38b7ce474df02ea287eb02ea513a627a" podUID="301dfa6ea9397ba2f06b3a202daef281" Feb 23 13:17:52.879076 master-0 kubenswrapper[17411]: I0223 13:17:52.878966 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b7ce474df02ea287eb02ea513a627a" path="/var/lib/kubelet/pods/38b7ce474df02ea287eb02ea513a627a/volumes" Feb 23 13:17:53.101452 master-0 kubenswrapper[17411]: I0223 13:17:53.101362 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:53.259409 master-0 kubenswrapper[17411]: I0223 13:17:53.259350 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-var-lock\") pod \"f2fae305-15e6-407f-b4da-ee80c73ac312\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " Feb 23 13:17:53.259862 master-0 kubenswrapper[17411]: I0223 13:17:53.259502 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-var-lock" (OuterVolumeSpecName: "var-lock") pod "f2fae305-15e6-407f-b4da-ee80c73ac312" (UID: "f2fae305-15e6-407f-b4da-ee80c73ac312"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:53.260105 master-0 kubenswrapper[17411]: I0223 13:17:53.260084 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-kubelet-dir\") pod \"f2fae305-15e6-407f-b4da-ee80c73ac312\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " Feb 23 13:17:53.260270 master-0 kubenswrapper[17411]: I0223 13:17:53.260199 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f2fae305-15e6-407f-b4da-ee80c73ac312" (UID: "f2fae305-15e6-407f-b4da-ee80c73ac312"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:17:53.260375 master-0 kubenswrapper[17411]: I0223 13:17:53.260354 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2fae305-15e6-407f-b4da-ee80c73ac312-kube-api-access\") pod \"f2fae305-15e6-407f-b4da-ee80c73ac312\" (UID: \"f2fae305-15e6-407f-b4da-ee80c73ac312\") " Feb 23 13:17:53.260860 master-0 kubenswrapper[17411]: I0223 13:17:53.260837 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:53.260978 master-0 kubenswrapper[17411]: I0223 13:17:53.260957 17411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2fae305-15e6-407f-b4da-ee80c73ac312-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:53.265218 master-0 kubenswrapper[17411]: I0223 13:17:53.265161 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f2fae305-15e6-407f-b4da-ee80c73ac312-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f2fae305-15e6-407f-b4da-ee80c73ac312" (UID: "f2fae305-15e6-407f-b4da-ee80c73ac312"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:17:53.363376 master-0 kubenswrapper[17411]: I0223 13:17:53.363301 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2fae305-15e6-407f-b4da-ee80c73ac312-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:53.537384 master-0 kubenswrapper[17411]: I0223 13:17:53.537164 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"f2fae305-15e6-407f-b4da-ee80c73ac312","Type":"ContainerDied","Data":"485f393b6a5609b601453927170db63a9f792cb1674b891a04c73541fb6dc0b7"} Feb 23 13:17:53.537384 master-0 kubenswrapper[17411]: I0223 13:17:53.537256 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="485f393b6a5609b601453927170db63a9f792cb1674b891a04c73541fb6dc0b7" Feb 23 13:17:53.537384 master-0 kubenswrapper[17411]: I0223 13:17:53.537305 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 23 13:17:55.396840 master-0 kubenswrapper[17411]: I0223 13:17:55.396751 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-786c4f4c85-kvlm6" podUID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerName="console" containerID="cri-o://5c49e6f1c5c040ede1977b802340fdbe4433a88936e4e64ed0ee86f8be3897c4" gracePeriod=15 Feb 23 13:17:55.557676 master-0 kubenswrapper[17411]: I0223 13:17:55.557612 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-786c4f4c85-kvlm6_481f3444-6cc7-4ae0-89cd-64fb776b4bf3/console/1.log" Feb 23 13:17:55.558421 master-0 kubenswrapper[17411]: I0223 13:17:55.558391 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-786c4f4c85-kvlm6_481f3444-6cc7-4ae0-89cd-64fb776b4bf3/console/0.log" Feb 23 13:17:55.558498 master-0 kubenswrapper[17411]: I0223 13:17:55.558466 17411 generic.go:334] "Generic (PLEG): container finished" podID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerID="5c49e6f1c5c040ede1977b802340fdbe4433a88936e4e64ed0ee86f8be3897c4" exitCode=2 Feb 23 13:17:55.558547 master-0 kubenswrapper[17411]: I0223 13:17:55.558502 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-786c4f4c85-kvlm6" event={"ID":"481f3444-6cc7-4ae0-89cd-64fb776b4bf3","Type":"ContainerDied","Data":"5c49e6f1c5c040ede1977b802340fdbe4433a88936e4e64ed0ee86f8be3897c4"} Feb 23 13:17:55.558589 master-0 kubenswrapper[17411]: I0223 13:17:55.558572 17411 scope.go:117] "RemoveContainer" containerID="65e0de263e78124444892833f2525946dbed9f25e7ce79c55fa4beeeeb5154ec" Feb 23 13:17:56.494931 master-0 kubenswrapper[17411]: I0223 13:17:56.494880 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-786c4f4c85-kvlm6_481f3444-6cc7-4ae0-89cd-64fb776b4bf3/console/1.log" Feb 23 13:17:56.495441 master-0 
kubenswrapper[17411]: I0223 13:17:56.494994 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:56.566484 master-0 kubenswrapper[17411]: I0223 13:17:56.566419 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-786c4f4c85-kvlm6_481f3444-6cc7-4ae0-89cd-64fb776b4bf3/console/1.log" Feb 23 13:17:56.566766 master-0 kubenswrapper[17411]: I0223 13:17:56.566511 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-786c4f4c85-kvlm6" event={"ID":"481f3444-6cc7-4ae0-89cd-64fb776b4bf3","Type":"ContainerDied","Data":"ca5a21740beffc32a56e62658152d32adb3fd0ae26afed4a705b227dcfbd1d31"} Feb 23 13:17:56.566766 master-0 kubenswrapper[17411]: I0223 13:17:56.566553 17411 scope.go:117] "RemoveContainer" containerID="5c49e6f1c5c040ede1977b802340fdbe4433a88936e4e64ed0ee86f8be3897c4" Feb 23 13:17:56.566766 master-0 kubenswrapper[17411]: I0223 13:17:56.566661 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-786c4f4c85-kvlm6" Feb 23 13:17:56.623071 master-0 kubenswrapper[17411]: I0223 13:17:56.622797 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfznz\" (UniqueName: \"kubernetes.io/projected/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-kube-api-access-sfznz\") pod \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " Feb 23 13:17:56.623071 master-0 kubenswrapper[17411]: I0223 13:17:56.622852 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-oauth-serving-cert\") pod \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " Feb 23 13:17:56.623071 master-0 kubenswrapper[17411]: I0223 13:17:56.623100 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-service-ca\") pod \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " Feb 23 13:17:56.623071 master-0 kubenswrapper[17411]: I0223 13:17:56.623145 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-config\") pod \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " Feb 23 13:17:56.623071 master-0 kubenswrapper[17411]: I0223 13:17:56.623175 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert\") pod \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " Feb 23 13:17:56.623071 master-0 kubenswrapper[17411]: I0223 
13:17:56.623194 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-oauth-config\") pod \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\" (UID: \"481f3444-6cc7-4ae0-89cd-64fb776b4bf3\") " Feb 23 13:17:56.624411 master-0 kubenswrapper[17411]: I0223 13:17:56.624344 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "481f3444-6cc7-4ae0-89cd-64fb776b4bf3" (UID: "481f3444-6cc7-4ae0-89cd-64fb776b4bf3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:56.624598 master-0 kubenswrapper[17411]: I0223 13:17:56.624364 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-config" (OuterVolumeSpecName: "console-config") pod "481f3444-6cc7-4ae0-89cd-64fb776b4bf3" (UID: "481f3444-6cc7-4ae0-89cd-64fb776b4bf3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:56.624993 master-0 kubenswrapper[17411]: I0223 13:17:56.624906 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-service-ca" (OuterVolumeSpecName: "service-ca") pod "481f3444-6cc7-4ae0-89cd-64fb776b4bf3" (UID: "481f3444-6cc7-4ae0-89cd-64fb776b4bf3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:17:56.626997 master-0 kubenswrapper[17411]: I0223 13:17:56.626921 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "481f3444-6cc7-4ae0-89cd-64fb776b4bf3" (UID: "481f3444-6cc7-4ae0-89cd-64fb776b4bf3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:17:56.627277 master-0 kubenswrapper[17411]: I0223 13:17:56.627209 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "481f3444-6cc7-4ae0-89cd-64fb776b4bf3" (UID: "481f3444-6cc7-4ae0-89cd-64fb776b4bf3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:17:56.627348 master-0 kubenswrapper[17411]: I0223 13:17:56.627308 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-kube-api-access-sfznz" (OuterVolumeSpecName: "kube-api-access-sfznz") pod "481f3444-6cc7-4ae0-89cd-64fb776b4bf3" (UID: "481f3444-6cc7-4ae0-89cd-64fb776b4bf3"). InnerVolumeSpecName "kube-api-access-sfznz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:17:56.726282 master-0 kubenswrapper[17411]: I0223 13:17:56.726140 17411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:56.726282 master-0 kubenswrapper[17411]: I0223 13:17:56.726195 17411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:56.726282 master-0 kubenswrapper[17411]: I0223 13:17:56.726212 17411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:56.726282 master-0 kubenswrapper[17411]: I0223 13:17:56.726230 17411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:56.726282 master-0 kubenswrapper[17411]: I0223 13:17:56.726269 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfznz\" (UniqueName: \"kubernetes.io/projected/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-kube-api-access-sfznz\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:56.726282 master-0 kubenswrapper[17411]: I0223 13:17:56.726282 17411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/481f3444-6cc7-4ae0-89cd-64fb776b4bf3-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:17:57.868032 master-0 kubenswrapper[17411]: I0223 13:17:57.867963 17411 scope.go:117] "RemoveContainer" 
containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8" Feb 23 13:17:57.868790 master-0 kubenswrapper[17411]: E0223 13:17:57.868184 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904" Feb 23 13:17:58.026291 master-0 kubenswrapper[17411]: I0223 13:17:58.026175 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-786c4f4c85-kvlm6"] Feb 23 13:17:58.194067 master-0 kubenswrapper[17411]: I0223 13:17:58.193963 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-786c4f4c85-kvlm6"] Feb 23 13:17:58.878325 master-0 kubenswrapper[17411]: I0223 13:17:58.878198 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" path="/var/lib/kubelet/pods/481f3444-6cc7-4ae0-89cd-64fb776b4bf3/volumes" Feb 23 13:18:06.868493 master-0 kubenswrapper[17411]: I0223 13:18:06.868402 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:18:06.903165 master-0 kubenswrapper[17411]: I0223 13:18:06.903085 17411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2bbabd65-fd90-468d-9231-49cbef259f2d" Feb 23 13:18:06.903165 master-0 kubenswrapper[17411]: I0223 13:18:06.903142 17411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2bbabd65-fd90-468d-9231-49cbef259f2d" Feb 23 13:18:06.930322 master-0 kubenswrapper[17411]: I0223 13:18:06.925507 17411 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:18:06.930995 master-0 kubenswrapper[17411]: I0223 13:18:06.930923 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:18:06.941706 master-0 kubenswrapper[17411]: I0223 13:18:06.941661 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 23 13:18:06.941929 master-0 kubenswrapper[17411]: I0223 13:18:06.941704 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:18:06.951487 master-0 kubenswrapper[17411]: I0223 13:18:06.951426 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 23 13:18:06.982399 master-0 kubenswrapper[17411]: W0223 13:18:06.982225 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod301dfa6ea9397ba2f06b3a202daef281.slice/crio-0a435e92fda23d333935aa1f839f4913bdbbe413cb7250dd0c0aa8479196ae22 WatchSource:0}: Error finding container 0a435e92fda23d333935aa1f839f4913bdbbe413cb7250dd0c0aa8479196ae22: Status 404 returned error can't find the container with id 0a435e92fda23d333935aa1f839f4913bdbbe413cb7250dd0c0aa8479196ae22 Feb 23 13:18:07.675366 master-0 kubenswrapper[17411]: I0223 13:18:07.675273 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"301dfa6ea9397ba2f06b3a202daef281","Type":"ContainerStarted","Data":"0845ba26114427c2431d4104e6c6a3974a2fbb4aab9f0fa109c4d7eb0058813e"} Feb 23 13:18:07.675366 master-0 kubenswrapper[17411]: I0223 13:18:07.675365 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"301dfa6ea9397ba2f06b3a202daef281","Type":"ContainerStarted","Data":"0a435e92fda23d333935aa1f839f4913bdbbe413cb7250dd0c0aa8479196ae22"} Feb 23 13:18:08.690782 master-0 kubenswrapper[17411]: I0223 13:18:08.690652 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"301dfa6ea9397ba2f06b3a202daef281","Type":"ContainerStarted","Data":"1ec62929e57f1c9c05b0929410c361de8a36b5f4da8091f97e884febe0c012c1"} Feb 23 13:18:08.690782 master-0 kubenswrapper[17411]: I0223 13:18:08.690735 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"301dfa6ea9397ba2f06b3a202daef281","Type":"ContainerStarted","Data":"992400789bf6890a9403b444a5d4b5caaa60b975668c3a0f6d393a5fcff384b5"} Feb 23 13:18:08.690782 master-0 kubenswrapper[17411]: I0223 13:18:08.690759 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"301dfa6ea9397ba2f06b3a202daef281","Type":"ContainerStarted","Data":"81925d7e192eb7204010b9195257ccb5139e09fceaf30f8b86734fa7097d74d6"} Feb 23 13:18:08.743364 master-0 kubenswrapper[17411]: I0223 13:18:08.743200 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.7431678 podStartE2EDuration="2.7431678s" podCreationTimestamp="2026-02-23 13:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:18:08.728620466 +0000 UTC m=+682.156127153" watchObservedRunningTime="2026-02-23 13:18:08.7431678 +0000 UTC m=+682.170674427" Feb 23 13:18:12.869425 master-0 kubenswrapper[17411]: I0223 13:18:12.869340 17411 scope.go:117] "RemoveContainer" containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8" Feb 23 13:18:12.870351 master-0 kubenswrapper[17411]: E0223 13:18:12.869643 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-6847bb4785-hgkrm_openshift-cluster-storage-operator(4e6bc033-cd90-4704-b03a-8e9c6c0d3904)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm" podUID="4e6bc033-cd90-4704-b03a-8e9c6c0d3904"
Feb 23 13:18:16.944601 master-0 kubenswrapper[17411]: I0223 13:18:16.943534 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 23 13:18:16.944601 master-0 kubenswrapper[17411]: I0223 13:18:16.943631 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="301dfa6ea9397ba2f06b3a202daef281" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 23 13:18:16.944601 master-0 kubenswrapper[17411]: I0223 13:18:16.943566 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:18:16.944601 master-0 kubenswrapper[17411]: I0223 13:18:16.943983 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:18:16.944601 master-0 kubenswrapper[17411]: I0223 13:18:16.944000 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:18:16.944601 master-0 kubenswrapper[17411]: I0223 13:18:16.944011 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:18:16.950196 master-0 kubenswrapper[17411]:
I0223 13:18:16.949991 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:18:17.792044 master-0 kubenswrapper[17411]: I0223 13:18:17.791958 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:18:26.884422 master-0 kubenswrapper[17411]: I0223 13:18:26.884333 17411 scope.go:117] "RemoveContainer" containerID="892ee3d3d4ab37828bb86ecb5889d534ad99fa7426d85a6aac6b88ecafe366b8"
Feb 23 13:18:26.942603 master-0 kubenswrapper[17411]: I0223 13:18:26.942533 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 23 13:18:26.942889 master-0 kubenswrapper[17411]: I0223 13:18:26.942616 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="301dfa6ea9397ba2f06b3a202daef281" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 23 13:18:27.903018 master-0 kubenswrapper[17411]: I0223 13:18:27.902956 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-hgkrm_4e6bc033-cd90-4704-b03a-8e9c6c0d3904/snapshot-controller/6.log"
Feb 23 13:18:27.903698 master-0 kubenswrapper[17411]: I0223 13:18:27.903026 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-hgkrm"
event={"ID":"4e6bc033-cd90-4704-b03a-8e9c6c0d3904","Type":"ContainerStarted","Data":"cfb272c3517924b3045dff8e445c5f9ac6149cfca16b48117015b8f9f0ca8e44"}
Feb 23 13:18:36.943234 master-0 kubenswrapper[17411]: I0223 13:18:36.943146 17411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 23 13:18:36.944096 master-0 kubenswrapper[17411]: I0223 13:18:36.943287 17411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="301dfa6ea9397ba2f06b3a202daef281" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 23 13:18:36.944096 master-0 kubenswrapper[17411]: I0223 13:18:36.943361 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:18:36.944282 master-0 kubenswrapper[17411]: I0223 13:18:36.944202 17411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"0845ba26114427c2431d4104e6c6a3974a2fbb4aab9f0fa109c4d7eb0058813e"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 23 13:18:36.944460 master-0 kubenswrapper[17411]: I0223 13:18:36.944399 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="301dfa6ea9397ba2f06b3a202daef281" containerName="kube-controller-manager"
containerID="cri-o://0845ba26114427c2431d4104e6c6a3974a2fbb4aab9f0fa109c4d7eb0058813e" gracePeriod=30
Feb 23 13:18:43.482493 master-0 kubenswrapper[17411]: I0223 13:18:43.482417 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Feb 23 13:18:43.486106 master-0 kubenswrapper[17411]: E0223 13:18:43.482868 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2fae305-15e6-407f-b4da-ee80c73ac312" containerName="installer"
Feb 23 13:18:43.486106 master-0 kubenswrapper[17411]: I0223 13:18:43.482890 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2fae305-15e6-407f-b4da-ee80c73ac312" containerName="installer"
Feb 23 13:18:43.486106 master-0 kubenswrapper[17411]: E0223 13:18:43.482917 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerName="console"
Feb 23 13:18:43.486106 master-0 kubenswrapper[17411]: I0223 13:18:43.482927 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerName="console"
Feb 23 13:18:43.486106 master-0 kubenswrapper[17411]: I0223 13:18:43.483209 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2fae305-15e6-407f-b4da-ee80c73ac312" containerName="installer"
Feb 23 13:18:43.486106 master-0 kubenswrapper[17411]: I0223 13:18:43.483266 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerName="console"
Feb 23 13:18:43.486106 master-0 kubenswrapper[17411]: I0223 13:18:43.483281 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerName="console"
Feb 23 13:18:43.486106 master-0 kubenswrapper[17411]: I0223 13:18:43.484072 17411 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.487893 master-0 kubenswrapper[17411]: I0223 13:18:43.487832 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-fk29t"
Feb 23 13:18:43.488776 master-0 kubenswrapper[17411]: I0223 13:18:43.488711 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 23 13:18:43.497349 master-0 kubenswrapper[17411]: I0223 13:18:43.497231 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Feb 23 13:18:43.586576 master-0 kubenswrapper[17411]: I0223 13:18:43.586471 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-var-lock\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.586814 master-0 kubenswrapper[17411]: I0223 13:18:43.586691 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.586814 master-0 kubenswrapper[17411]: I0223 13:18:43.586811 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.688665 master-0 kubenswrapper[17411]: I0223 13:18:43.688581 17411
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.688665 master-0 kubenswrapper[17411]: I0223 13:18:43.688673 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.688941 master-0 kubenswrapper[17411]: I0223 13:18:43.688770 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-var-lock\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.688941 master-0 kubenswrapper[17411]: I0223 13:18:43.688912 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-var-lock\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.689887 master-0 kubenswrapper[17411]: I0223 13:18:43.689836 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.706807 master-0 kubenswrapper[17411]: I0223 13:18:43.706747 17411 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:43.812349 master-0 kubenswrapper[17411]: I0223 13:18:43.812064 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Feb 23 13:18:44.327941 master-0 kubenswrapper[17411]: I0223 13:18:44.327858 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Feb 23 13:18:45.093949 master-0 kubenswrapper[17411]: I0223 13:18:45.093692 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"6b09bcbe-cbfb-4348-9dc7-74508f7cd592","Type":"ContainerStarted","Data":"bb06bc0bbb808218af72312a94b42989998d5e23102e6dcb8788f173329fca28"}
Feb 23 13:18:45.094880 master-0 kubenswrapper[17411]: I0223 13:18:45.093811 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"6b09bcbe-cbfb-4348-9dc7-74508f7cd592","Type":"ContainerStarted","Data":"d2c55908af843df1acc389dd1bc4b370ce1f793c1c1d18f791d9defa3f39b8f0"}
Feb 23 13:18:45.136569 master-0 kubenswrapper[17411]: I0223 13:18:45.129952 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-0" podStartSLOduration=2.129916641 podStartE2EDuration="2.129916641s" podCreationTimestamp="2026-02-23 13:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:18:45.125942378 +0000 UTC m=+718.553449015" watchObservedRunningTime="2026-02-23 13:18:45.129916641 +0000 UTC m=+718.557423338"
Feb 23 13:18:47.376417 master-0 kubenswrapper[17411]: I0223
13:18:47.376349 17411 scope.go:117] "RemoveContainer" containerID="6038b47b20500295b07b50ea89a301874d951b7f4b3a978dab3e4e44820c0ac7"
Feb 23 13:18:47.399753 master-0 kubenswrapper[17411]: I0223 13:18:47.399702 17411 scope.go:117] "RemoveContainer" containerID="ea9aa7893884286b0f9dd2cc94d3dc00f41c3846f07eae1cc605631dd0fe37bc"
Feb 23 13:19:07.333290 master-0 kubenswrapper[17411]: I0223 13:19:07.333151 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_301dfa6ea9397ba2f06b3a202daef281/kube-controller-manager/0.log"
Feb 23 13:19:07.334576 master-0 kubenswrapper[17411]: I0223 13:19:07.333293 17411 generic.go:334] "Generic (PLEG): container finished" podID="301dfa6ea9397ba2f06b3a202daef281" containerID="0845ba26114427c2431d4104e6c6a3974a2fbb4aab9f0fa109c4d7eb0058813e" exitCode=137
Feb 23 13:19:07.334576 master-0 kubenswrapper[17411]: I0223 13:19:07.333752 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"301dfa6ea9397ba2f06b3a202daef281","Type":"ContainerDied","Data":"0845ba26114427c2431d4104e6c6a3974a2fbb4aab9f0fa109c4d7eb0058813e"}
Feb 23 13:19:08.359510 master-0 kubenswrapper[17411]: I0223 13:19:08.359409 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_301dfa6ea9397ba2f06b3a202daef281/kube-controller-manager/0.log"
Feb 23 13:19:08.359510 master-0 kubenswrapper[17411]: I0223 13:19:08.359507 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"301dfa6ea9397ba2f06b3a202daef281","Type":"ContainerStarted","Data":"5731d000d615befadddcdcdce0dce9bb43bac7768e5e3a2662acb24b9667b0c2"}
Feb 23 13:19:16.518778 master-0 kubenswrapper[17411]: I0223 13:19:16.518655 17411 kubelet.go:2431] "SyncLoop REMOVE" source="file"
pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 23 13:19:16.519882 master-0 kubenswrapper[17411]: I0223 13:19:16.519111 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" containerID="cri-o://dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6" gracePeriod=30
Feb 23 13:19:16.520778 master-0 kubenswrapper[17411]: I0223 13:19:16.520710 17411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 23 13:19:16.521225 master-0 kubenswrapper[17411]: E0223 13:19:16.521178 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerName="console"
Feb 23 13:19:16.521225 master-0 kubenswrapper[17411]: I0223 13:19:16.521208 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="481f3444-6cc7-4ae0-89cd-64fb776b4bf3" containerName="console"
Feb 23 13:19:16.521433 master-0 kubenswrapper[17411]: E0223 13:19:16.521285 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.521433 master-0 kubenswrapper[17411]: I0223 13:19:16.521304 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.521433 master-0 kubenswrapper[17411]: E0223 13:19:16.521327 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.521433 master-0 kubenswrapper[17411]: I0223 13:19:16.521341 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.521433 master-0 kubenswrapper[17411]: E0223 13:19:16.521365 17411 cpu_manager.go:410] "RemoveStaleState: removing
container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.521433 master-0 kubenswrapper[17411]: I0223 13:19:16.521377 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.521878 master-0 kubenswrapper[17411]: I0223 13:19:16.521621 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.521878 master-0 kubenswrapper[17411]: I0223 13:19:16.521651 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.521878 master-0 kubenswrapper[17411]: I0223 13:19:16.521671 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.522068 master-0 kubenswrapper[17411]: E0223 13:19:16.521930 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.522068 master-0 kubenswrapper[17411]: I0223 13:19:16.521948 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.522290 master-0 kubenswrapper[17411]: I0223 13:19:16.522214 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 23 13:19:16.524435 master-0 kubenswrapper[17411]: I0223 13:19:16.524389 17411 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:19:16.559369 master-0 kubenswrapper[17411]: I0223 13:19:16.559290 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/01ba52e0b53256909a31799a5101ae42-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"01ba52e0b53256909a31799a5101ae42\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:19:16.559546 master-0 kubenswrapper[17411]: I0223 13:19:16.559473 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/01ba52e0b53256909a31799a5101ae42-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"01ba52e0b53256909a31799a5101ae42\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:19:16.661092 master-0 kubenswrapper[17411]: I0223 13:19:16.661002 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/01ba52e0b53256909a31799a5101ae42-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"01ba52e0b53256909a31799a5101ae42\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:19:16.661309 master-0 kubenswrapper[17411]: I0223 13:19:16.661123 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/01ba52e0b53256909a31799a5101ae42-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"01ba52e0b53256909a31799a5101ae42\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:19:16.661309 master-0 kubenswrapper[17411]: I0223 13:19:16.661223 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName:
\"kubernetes.io/host-path/01ba52e0b53256909a31799a5101ae42-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"01ba52e0b53256909a31799a5101ae42\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:19:16.661485 master-0 kubenswrapper[17411]: I0223 13:19:16.661331 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/01ba52e0b53256909a31799a5101ae42-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"01ba52e0b53256909a31799a5101ae42\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:19:16.811915 master-0 kubenswrapper[17411]: I0223 13:19:16.811694 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:19:16.817508 master-0 kubenswrapper[17411]: I0223 13:19:16.817430 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 23 13:19:16.839085 master-0 kubenswrapper[17411]: I0223 13:19:16.838014 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 23 13:19:16.909002 master-0 kubenswrapper[17411]: I0223 13:19:16.908933 17411 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Feb 23 13:19:16.946753 master-0 kubenswrapper[17411]: I0223 13:19:16.946083 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:19:16.946753 master-0 kubenswrapper[17411]: I0223 13:19:16.946147 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:19:16.946753 master-0 kubenswrapper[17411]: I0223 13:19:16.946166 17411 kubelet.go:2706] "Unable to
find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f6df0d0f-2538-4368-8abb-d08349f326f0"
Feb 23 13:19:16.946753 master-0 kubenswrapper[17411]: I0223 13:19:16.946184 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 23 13:19:16.946753 master-0 kubenswrapper[17411]: I0223 13:19:16.946197 17411 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f6df0d0f-2538-4368-8abb-d08349f326f0"
Feb 23 13:19:16.954319 master-0 kubenswrapper[17411]: I0223 13:19:16.952394 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:19:16.958998 master-0 kubenswrapper[17411]: I0223 13:19:16.958932 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 23 13:19:16.958998 master-0 kubenswrapper[17411]: I0223 13:19:16.958989 17411 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f6df0d0f-2538-4368-8abb-d08349f326f0"
Feb 23 13:19:16.972395 master-0 kubenswrapper[17411]: I0223 13:19:16.972349 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") "
Feb 23 13:19:16.972522 master-0 kubenswrapper[17411]: I0223 13:19:16.972501 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") "
Feb 23 13:19:16.973131 master-0 kubenswrapper[17411]: I0223
13:19:16.973105 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets" (OuterVolumeSpecName: "secrets") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:19:16.973203 master-0 kubenswrapper[17411]: I0223 13:19:16.973145 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs" (OuterVolumeSpecName: "logs") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:19:17.078533 master-0 kubenswrapper[17411]: I0223 13:19:17.078379 17411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:17.078533 master-0 kubenswrapper[17411]: I0223 13:19:17.078432 17411 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") on node \"master-0\" DevicePath \"\""
Feb 23 13:19:17.461177 master-0 kubenswrapper[17411]: I0223 13:19:17.460866 17411 generic.go:334] "Generic (PLEG): container finished" podID="6b09bcbe-cbfb-4348-9dc7-74508f7cd592" containerID="bb06bc0bbb808218af72312a94b42989998d5e23102e6dcb8788f173329fca28" exitCode=0
Feb 23 13:19:17.461177 master-0 kubenswrapper[17411]: I0223 13:19:17.461038 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"6b09bcbe-cbfb-4348-9dc7-74508f7cd592","Type":"ContainerDied","Data":"bb06bc0bbb808218af72312a94b42989998d5e23102e6dcb8788f173329fca28"}
Feb 23 13:19:17.465768 master-0 kubenswrapper[17411]: I0223 13:19:17.464116 17411
generic.go:334] "Generic (PLEG): container finished" podID="01ba52e0b53256909a31799a5101ae42" containerID="3fc25b708d2f273dbc23007da3d10ec082eaca170ee45de69b5ed187a5798472" exitCode=0
Feb 23 13:19:17.465768 master-0 kubenswrapper[17411]: I0223 13:19:17.464215 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"01ba52e0b53256909a31799a5101ae42","Type":"ContainerDied","Data":"3fc25b708d2f273dbc23007da3d10ec082eaca170ee45de69b5ed187a5798472"}
Feb 23 13:19:17.465768 master-0 kubenswrapper[17411]: I0223 13:19:17.464274 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"01ba52e0b53256909a31799a5101ae42","Type":"ContainerStarted","Data":"0474940eff0e646916011288ba2d48a6191088094bb6b2176b467980a70d504b"}
Feb 23 13:19:17.468530 master-0 kubenswrapper[17411]: I0223 13:19:17.468293 17411 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6" exitCode=0
Feb 23 13:19:17.468530 master-0 kubenswrapper[17411]: I0223 13:19:17.468420 17411 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 23 13:19:17.468530 master-0 kubenswrapper[17411]: I0223 13:19:17.468440 17411 scope.go:117] "RemoveContainer" containerID="dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6"
Feb 23 13:19:17.477002 master-0 kubenswrapper[17411]: I0223 13:19:17.476949 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 23 13:19:17.510882 master-0 kubenswrapper[17411]: I0223 13:19:17.510829 17411 scope.go:117] "RemoveContainer" containerID="a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7"
Feb 23 13:19:17.575084 master-0 kubenswrapper[17411]: I0223 13:19:17.574570 17411 scope.go:117] "RemoveContainer" containerID="dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6"
Feb 23 13:19:17.576057 master-0 kubenswrapper[17411]: E0223 13:19:17.575490 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6\": container with ID starting with dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6 not found: ID does not exist" containerID="dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6"
Feb 23 13:19:17.576057 master-0 kubenswrapper[17411]: I0223 13:19:17.575541 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6"} err="failed to get container status \"dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6\": rpc error: code = NotFound desc = could not find container \"dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6\": container with ID starting with dab90e48a0b2b25e9dfb9a1cb8ff587e6984c200818710e360d313c2da167aa6 not found: ID does not exist"
Feb 23 13:19:17.576057 master-0
kubenswrapper[17411]: I0223 13:19:17.575608 17411 scope.go:117] "RemoveContainer" containerID="a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7"
Feb 23 13:19:17.576373 master-0 kubenswrapper[17411]: E0223 13:19:17.576045 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7\": container with ID starting with a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7 not found: ID does not exist" containerID="a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7"
Feb 23 13:19:17.576373 master-0 kubenswrapper[17411]: I0223 13:19:17.576121 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7"} err="failed to get container status \"a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7\": rpc error: code = NotFound desc = could not find container \"a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7\": container with ID starting with a91825da018e7f69655e040c7dcd7e56e056b143e3598d668e0bf39ad5a544f7 not found: ID does not exist"
Feb 23 13:19:18.489268 master-0 kubenswrapper[17411]: I0223 13:19:18.488317 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"01ba52e0b53256909a31799a5101ae42","Type":"ContainerStarted","Data":"b0a8fcb9633c014f87d423b01cb24a5eb77e1d398077cd0e8446b7641e4fab29"}
Feb 23 13:19:18.489268 master-0 kubenswrapper[17411]: I0223 13:19:18.488381 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"01ba52e0b53256909a31799a5101ae42","Type":"ContainerStarted","Data":"5b99d7be0b977705e67688989a1c2901b8ebe3217da57b977e0a6378ee97bf3a"}
Feb 23 13:19:18.489268 master-0 kubenswrapper[17411]: I0223
13:19:18.488399 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"01ba52e0b53256909a31799a5101ae42","Type":"ContainerStarted","Data":"d28fd034d655737e9609b7101d3654db91decfde43c3f86b0350dd5ecf1eeaae"} Feb 23 13:19:18.493263 master-0 kubenswrapper[17411]: I0223 13:19:18.489626 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:19:18.526349 master-0 kubenswrapper[17411]: I0223 13:19:18.525593 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.525568767 podStartE2EDuration="2.525568767s" podCreationTimestamp="2026-02-23 13:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:19:18.520258166 +0000 UTC m=+751.947764763" watchObservedRunningTime="2026-02-23 13:19:18.525568767 +0000 UTC m=+751.953075374" Feb 23 13:19:18.849978 master-0 kubenswrapper[17411]: I0223 13:19:18.849923 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Feb 23 13:19:18.894443 master-0 kubenswrapper[17411]: I0223 13:19:18.888323 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c3cb71c9851003c8de7e7c5db4b87e" path="/var/lib/kubelet/pods/56c3cb71c9851003c8de7e7c5db4b87e/volumes" Feb 23 13:19:18.917206 master-0 kubenswrapper[17411]: I0223 13:19:18.917124 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-var-lock\") pod \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " Feb 23 13:19:18.917206 master-0 kubenswrapper[17411]: I0223 13:19:18.917201 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kubelet-dir\") pod \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " Feb 23 13:19:18.917505 master-0 kubenswrapper[17411]: I0223 13:19:18.917257 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kube-api-access\") pod \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\" (UID: \"6b09bcbe-cbfb-4348-9dc7-74508f7cd592\") " Feb 23 13:19:18.917994 master-0 kubenswrapper[17411]: I0223 13:19:18.917927 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-var-lock" (OuterVolumeSpecName: "var-lock") pod "6b09bcbe-cbfb-4348-9dc7-74508f7cd592" (UID: "6b09bcbe-cbfb-4348-9dc7-74508f7cd592"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:19:18.918071 master-0 kubenswrapper[17411]: I0223 13:19:18.918041 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6b09bcbe-cbfb-4348-9dc7-74508f7cd592" (UID: "6b09bcbe-cbfb-4348-9dc7-74508f7cd592"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:19:18.935961 master-0 kubenswrapper[17411]: I0223 13:19:18.935882 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6b09bcbe-cbfb-4348-9dc7-74508f7cd592" (UID: "6b09bcbe-cbfb-4348-9dc7-74508f7cd592"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:19:19.022590 master-0 kubenswrapper[17411]: I0223 13:19:19.020165 17411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:19.022590 master-0 kubenswrapper[17411]: I0223 13:19:19.020220 17411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:19.022590 master-0 kubenswrapper[17411]: I0223 13:19:19.020271 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b09bcbe-cbfb-4348-9dc7-74508f7cd592-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 23 13:19:19.506880 master-0 kubenswrapper[17411]: I0223 13:19:19.506754 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" 
event={"ID":"6b09bcbe-cbfb-4348-9dc7-74508f7cd592","Type":"ContainerDied","Data":"d2c55908af843df1acc389dd1bc4b370ce1f793c1c1d18f791d9defa3f39b8f0"} Feb 23 13:19:19.506880 master-0 kubenswrapper[17411]: I0223 13:19:19.506844 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Feb 23 13:19:19.506880 master-0 kubenswrapper[17411]: I0223 13:19:19.506886 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2c55908af843df1acc389dd1bc4b370ce1f793c1c1d18f791d9defa3f39b8f0" Feb 23 13:20:06.826377 master-0 kubenswrapper[17411]: I0223 13:20:06.826307 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 23 13:21:19.041002 master-0 kubenswrapper[17411]: I0223 13:21:19.040892 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-46dxp"] Feb 23 13:21:19.042362 master-0 kubenswrapper[17411]: E0223 13:21:19.041459 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b09bcbe-cbfb-4348-9dc7-74508f7cd592" containerName="installer" Feb 23 13:21:19.042362 master-0 kubenswrapper[17411]: I0223 13:21:19.041484 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b09bcbe-cbfb-4348-9dc7-74508f7cd592" containerName="installer" Feb 23 13:21:19.042362 master-0 kubenswrapper[17411]: I0223 13:21:19.041787 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b09bcbe-cbfb-4348-9dc7-74508f7cd592" containerName="installer" Feb 23 13:21:19.042682 master-0 kubenswrapper[17411]: I0223 13:21:19.042626 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.045621 master-0 kubenswrapper[17411]: I0223 13:21:19.045569 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 23 13:21:19.207685 master-0 kubenswrapper[17411]: I0223 13:21:19.207602 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5ed5ee95-4638-4512-abb9-efad2f49dc19-ready\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.207685 master-0 kubenswrapper[17411]: I0223 13:21:19.207675 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5ed5ee95-4638-4512-abb9-efad2f49dc19-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.208064 master-0 kubenswrapper[17411]: I0223 13:21:19.207793 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ed5ee95-4638-4512-abb9-efad2f49dc19-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.208490 master-0 kubenswrapper[17411]: I0223 13:21:19.208427 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr465\" (UniqueName: \"kubernetes.io/projected/5ed5ee95-4638-4512-abb9-efad2f49dc19-kube-api-access-kr465\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.310572 master-0 kubenswrapper[17411]: I0223 13:21:19.310379 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5ed5ee95-4638-4512-abb9-efad2f49dc19-ready\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.310572 master-0 kubenswrapper[17411]: I0223 13:21:19.310550 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5ed5ee95-4638-4512-abb9-efad2f49dc19-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.310835 master-0 kubenswrapper[17411]: I0223 13:21:19.310631 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ed5ee95-4638-4512-abb9-efad2f49dc19-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.310835 master-0 kubenswrapper[17411]: I0223 13:21:19.310704 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr465\" (UniqueName: \"kubernetes.io/projected/5ed5ee95-4638-4512-abb9-efad2f49dc19-kube-api-access-kr465\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.310970 master-0 kubenswrapper[17411]: I0223 13:21:19.310905 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ed5ee95-4638-4512-abb9-efad2f49dc19-tuning-conf-dir\") pod 
\"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.311345 master-0 kubenswrapper[17411]: I0223 13:21:19.311290 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5ed5ee95-4638-4512-abb9-efad2f49dc19-ready\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.311643 master-0 kubenswrapper[17411]: I0223 13:21:19.311620 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5ed5ee95-4638-4512-abb9-efad2f49dc19-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.327862 master-0 kubenswrapper[17411]: I0223 13:21:19.327789 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr465\" (UniqueName: \"kubernetes.io/projected/5ed5ee95-4638-4512-abb9-efad2f49dc19-kube-api-access-kr465\") pod \"cni-sysctl-allowlist-ds-46dxp\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.393117 master-0 kubenswrapper[17411]: I0223 13:21:19.393052 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:19.749535 master-0 kubenswrapper[17411]: I0223 13:21:19.749444 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" event={"ID":"5ed5ee95-4638-4512-abb9-efad2f49dc19","Type":"ContainerStarted","Data":"4e9c9ccddfe80c8d8f0111a71b970a11c0b8efc0d3cff8734f6c98541b7874e0"} Feb 23 13:21:20.096117 master-0 kubenswrapper[17411]: I0223 13:21:20.094446 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b9778d748-nlz5s"] Feb 23 13:21:20.096117 master-0 kubenswrapper[17411]: I0223 13:21:20.095779 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.117275 master-0 kubenswrapper[17411]: I0223 13:21:20.114648 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 13:21:20.121837 master-0 kubenswrapper[17411]: I0223 13:21:20.121776 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b9778d748-nlz5s"] Feb 23 13:21:20.226171 master-0 kubenswrapper[17411]: I0223 13:21:20.226107 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcc9\" (UniqueName: \"kubernetes.io/projected/276116f1-ec73-4615-9607-8f29b379ea85-kube-api-access-kjcc9\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.226171 master-0 kubenswrapper[17411]: I0223 13:21:20.226167 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-oauth-serving-cert\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " 
pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.226421 master-0 kubenswrapper[17411]: I0223 13:21:20.226224 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-console-config\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.226542 master-0 kubenswrapper[17411]: I0223 13:21:20.226486 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-oauth-config\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.226588 master-0 kubenswrapper[17411]: I0223 13:21:20.226575 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-service-ca\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.226674 master-0 kubenswrapper[17411]: I0223 13:21:20.226648 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-trusted-ca-bundle\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.226723 master-0 kubenswrapper[17411]: I0223 13:21:20.226678 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-serving-cert\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.328536 master-0 kubenswrapper[17411]: I0223 13:21:20.328473 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-oauth-config\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.328536 master-0 kubenswrapper[17411]: I0223 13:21:20.328533 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-service-ca\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.328796 master-0 kubenswrapper[17411]: I0223 13:21:20.328572 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-trusted-ca-bundle\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.328796 master-0 kubenswrapper[17411]: I0223 13:21:20.328595 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-serving-cert\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.328796 master-0 kubenswrapper[17411]: I0223 13:21:20.328666 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-kjcc9\" (UniqueName: \"kubernetes.io/projected/276116f1-ec73-4615-9607-8f29b379ea85-kube-api-access-kjcc9\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.328796 master-0 kubenswrapper[17411]: I0223 13:21:20.328693 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-oauth-serving-cert\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.328796 master-0 kubenswrapper[17411]: I0223 13:21:20.328733 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-console-config\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.330362 master-0 kubenswrapper[17411]: I0223 13:21:20.330325 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-service-ca\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.330435 master-0 kubenswrapper[17411]: I0223 13:21:20.330409 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-oauth-serving-cert\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.330509 master-0 kubenswrapper[17411]: I0223 13:21:20.330474 17411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-console-config\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.330565 master-0 kubenswrapper[17411]: I0223 13:21:20.330535 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-trusted-ca-bundle\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.332951 master-0 kubenswrapper[17411]: I0223 13:21:20.332878 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-oauth-config\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.335657 master-0 kubenswrapper[17411]: I0223 13:21:20.335623 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-serving-cert\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.347393 master-0 kubenswrapper[17411]: I0223 13:21:20.347312 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcc9\" (UniqueName: \"kubernetes.io/projected/276116f1-ec73-4615-9607-8f29b379ea85-kube-api-access-kjcc9\") pod \"console-5b9778d748-nlz5s\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.433728 master-0 kubenswrapper[17411]: I0223 13:21:20.433645 17411 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:20.759717 master-0 kubenswrapper[17411]: I0223 13:21:20.759596 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" event={"ID":"5ed5ee95-4638-4512-abb9-efad2f49dc19","Type":"ContainerStarted","Data":"e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d"} Feb 23 13:21:20.760387 master-0 kubenswrapper[17411]: I0223 13:21:20.759901 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:20.782400 master-0 kubenswrapper[17411]: I0223 13:21:20.782299 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" podStartSLOduration=1.782232478 podStartE2EDuration="1.782232478s" podCreationTimestamp="2026-02-23 13:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:20.779965393 +0000 UTC m=+874.207472010" watchObservedRunningTime="2026-02-23 13:21:20.782232478 +0000 UTC m=+874.209739075" Feb 23 13:21:20.791659 master-0 kubenswrapper[17411]: I0223 13:21:20.791600 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" Feb 23 13:21:20.921476 master-0 kubenswrapper[17411]: W0223 13:21:20.921409 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod276116f1_ec73_4615_9607_8f29b379ea85.slice/crio-f3bb8e3ca9fe67de07db523f52ba8fb40c4b66df886daa376e0632e91c11d585 WatchSource:0}: Error finding container f3bb8e3ca9fe67de07db523f52ba8fb40c4b66df886daa376e0632e91c11d585: Status 404 returned error can't find the container with id f3bb8e3ca9fe67de07db523f52ba8fb40c4b66df886daa376e0632e91c11d585 Feb 23 13:21:20.921848 
master-0 kubenswrapper[17411]: I0223 13:21:20.921783 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b9778d748-nlz5s"] Feb 23 13:21:21.018975 master-0 kubenswrapper[17411]: I0223 13:21:21.018902 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-46dxp"] Feb 23 13:21:21.767849 master-0 kubenswrapper[17411]: I0223 13:21:21.767783 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b9778d748-nlz5s" event={"ID":"276116f1-ec73-4615-9607-8f29b379ea85","Type":"ContainerStarted","Data":"3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0"} Feb 23 13:21:21.768615 master-0 kubenswrapper[17411]: I0223 13:21:21.768407 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b9778d748-nlz5s" event={"ID":"276116f1-ec73-4615-9607-8f29b379ea85","Type":"ContainerStarted","Data":"f3bb8e3ca9fe67de07db523f52ba8fb40c4b66df886daa376e0632e91c11d585"} Feb 23 13:21:21.814218 master-0 kubenswrapper[17411]: I0223 13:21:21.814003 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b9778d748-nlz5s" podStartSLOduration=1.8139729500000001 podStartE2EDuration="1.81397295s" podCreationTimestamp="2026-02-23 13:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:21.801681081 +0000 UTC m=+875.229187718" watchObservedRunningTime="2026-02-23 13:21:21.81397295 +0000 UTC m=+875.241479557" Feb 23 13:21:22.778078 master-0 kubenswrapper[17411]: I0223 13:21:22.777954 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" podUID="5ed5ee95-4638-4512-abb9-efad2f49dc19" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" 
gracePeriod=30 Feb 23 13:21:26.852101 master-0 kubenswrapper[17411]: I0223 13:21:26.852027 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-687c57c5d-h54d9"] Feb 23 13:21:26.854138 master-0 kubenswrapper[17411]: I0223 13:21:26.854094 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:26.856600 master-0 kubenswrapper[17411]: I0223 13:21:26.856536 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 23 13:21:26.856931 master-0 kubenswrapper[17411]: I0223 13:21:26.856693 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 23 13:21:26.859511 master-0 kubenswrapper[17411]: I0223 13:21:26.859448 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 23 13:21:26.864279 master-0 kubenswrapper[17411]: I0223 13:21:26.864214 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 23 13:21:26.864501 master-0 kubenswrapper[17411]: I0223 13:21:26.864467 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 23 13:21:26.868344 master-0 kubenswrapper[17411]: I0223 13:21:26.868294 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 23 13:21:26.900904 master-0 kubenswrapper[17411]: I0223 13:21:26.900850 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-687c57c5d-h54d9"] Feb 23 13:21:26.948804 master-0 kubenswrapper[17411]: I0223 13:21:26.946771 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-federate-client-tls\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:26.948804 master-0 kubenswrapper[17411]: I0223 13:21:26.946835 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-metrics-client-ca\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:26.948804 master-0 kubenswrapper[17411]: I0223 13:21:26.946853 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-telemeter-trusted-ca-bundle\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:26.948804 master-0 kubenswrapper[17411]: I0223 13:21:26.946913 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:26.948804 master-0 kubenswrapper[17411]: I0223 13:21:26.946943 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-secret-telemeter-client\") pod 
\"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:26.948804 master-0 kubenswrapper[17411]: I0223 13:21:26.946978 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqlx\" (UniqueName: \"kubernetes.io/projected/1073e746-e716-4415-b1c8-6db23b75e17d-kube-api-access-tpqlx\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:26.948804 master-0 kubenswrapper[17411]: I0223 13:21:26.946998 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-telemeter-client-tls\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:26.948804 master-0 kubenswrapper[17411]: I0223 13:21:26.947018 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-serving-certs-ca-bundle\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.048944 master-0 kubenswrapper[17411]: I0223 13:21:27.048869 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-federate-client-tls\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.048944 master-0 kubenswrapper[17411]: I0223 
13:21:27.048929 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-metrics-client-ca\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.049232 master-0 kubenswrapper[17411]: I0223 13:21:27.049159 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-telemeter-trusted-ca-bundle\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.050989 master-0 kubenswrapper[17411]: I0223 13:21:27.049378 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.050989 master-0 kubenswrapper[17411]: I0223 13:21:27.049421 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-secret-telemeter-client\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.050989 master-0 kubenswrapper[17411]: I0223 13:21:27.049473 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpqlx\" (UniqueName: 
\"kubernetes.io/projected/1073e746-e716-4415-b1c8-6db23b75e17d-kube-api-access-tpqlx\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.050989 master-0 kubenswrapper[17411]: I0223 13:21:27.049515 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-telemeter-client-tls\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.050989 master-0 kubenswrapper[17411]: I0223 13:21:27.049542 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-serving-certs-ca-bundle\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.050989 master-0 kubenswrapper[17411]: I0223 13:21:27.050707 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-serving-certs-ca-bundle\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.051227 master-0 kubenswrapper[17411]: I0223 13:21:27.051007 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-metrics-client-ca\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.051827 master-0 
kubenswrapper[17411]: I0223 13:21:27.051754 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1073e746-e716-4415-b1c8-6db23b75e17d-telemeter-trusted-ca-bundle\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.053610 master-0 kubenswrapper[17411]: I0223 13:21:27.053404 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-federate-client-tls\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.053610 master-0 kubenswrapper[17411]: I0223 13:21:27.053410 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.054909 master-0 kubenswrapper[17411]: I0223 13:21:27.054884 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-telemeter-client-tls\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.055735 master-0 kubenswrapper[17411]: I0223 13:21:27.055050 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: 
\"kubernetes.io/secret/1073e746-e716-4415-b1c8-6db23b75e17d-secret-telemeter-client\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.068284 master-0 kubenswrapper[17411]: I0223 13:21:27.067473 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpqlx\" (UniqueName: \"kubernetes.io/projected/1073e746-e716-4415-b1c8-6db23b75e17d-kube-api-access-tpqlx\") pod \"telemeter-client-687c57c5d-h54d9\" (UID: \"1073e746-e716-4415-b1c8-6db23b75e17d\") " pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.176884 master-0 kubenswrapper[17411]: I0223 13:21:27.176739 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" Feb 23 13:21:27.674611 master-0 kubenswrapper[17411]: I0223 13:21:27.674549 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-687c57c5d-h54d9"] Feb 23 13:21:27.676222 master-0 kubenswrapper[17411]: I0223 13:21:27.676172 17411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:21:27.830137 master-0 kubenswrapper[17411]: I0223 13:21:27.830033 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" event={"ID":"1073e746-e716-4415-b1c8-6db23b75e17d","Type":"ContainerStarted","Data":"46444a6e408419d526f080a1e15371c306debbcdcd6b5e8a7775a151e2153de3"} Feb 23 13:21:29.396282 master-0 kubenswrapper[17411]: E0223 13:21:29.396200 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 
23 13:21:29.398256 master-0 kubenswrapper[17411]: E0223 13:21:29.398189 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:21:29.399578 master-0 kubenswrapper[17411]: E0223 13:21:29.399516 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:21:29.399645 master-0 kubenswrapper[17411]: E0223 13:21:29.399614 17411 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" podUID="5ed5ee95-4638-4512-abb9-efad2f49dc19" containerName="kube-multus-additional-cni-plugins" Feb 23 13:21:29.850505 master-0 kubenswrapper[17411]: I0223 13:21:29.850434 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" event={"ID":"1073e746-e716-4415-b1c8-6db23b75e17d","Type":"ContainerStarted","Data":"2ac4298d1097802b1a18123f54d926e8d635535888c9095b8e10b079b24d2dd1"} Feb 23 13:21:30.434567 master-0 kubenswrapper[17411]: I0223 13:21:30.434346 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:30.434567 master-0 kubenswrapper[17411]: I0223 13:21:30.434455 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:30.442530 master-0 
kubenswrapper[17411]: I0223 13:21:30.442458 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:30.867552 master-0 kubenswrapper[17411]: I0223 13:21:30.867435 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" event={"ID":"1073e746-e716-4415-b1c8-6db23b75e17d","Type":"ContainerStarted","Data":"86bdb33db981fb0dee89099f6da2bab84f96a43ed6fd1500626937792391374d"} Feb 23 13:21:30.867552 master-0 kubenswrapper[17411]: I0223 13:21:30.867526 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" event={"ID":"1073e746-e716-4415-b1c8-6db23b75e17d","Type":"ContainerStarted","Data":"007b78592eb57b4f8ab4480ed097ec7d2fd7dd20e907e8645642a40cc337e720"} Feb 23 13:21:30.895814 master-0 kubenswrapper[17411]: I0223 13:21:30.895711 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:21:30.911270 master-0 kubenswrapper[17411]: I0223 13:21:30.911162 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-687c57c5d-h54d9" podStartSLOduration=2.97591781 podStartE2EDuration="4.911147009s" podCreationTimestamp="2026-02-23 13:21:26 +0000 UTC" firstStartedPulling="2026-02-23 13:21:27.676105271 +0000 UTC m=+881.103611908" lastFinishedPulling="2026-02-23 13:21:29.61133448 +0000 UTC m=+883.038841107" observedRunningTime="2026-02-23 13:21:30.91046624 +0000 UTC m=+884.337972867" watchObservedRunningTime="2026-02-23 13:21:30.911147009 +0000 UTC m=+884.338653606" Feb 23 13:21:31.046000 master-0 kubenswrapper[17411]: I0223 13:21:31.045879 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55fc6cb76d-9jsfs"] Feb 23 13:21:31.736530 master-0 kubenswrapper[17411]: I0223 13:21:31.736474 17411 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-console/console-78988746df-4zq9k"] Feb 23 13:21:31.737813 master-0 kubenswrapper[17411]: I0223 13:21:31.737787 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.765788 master-0 kubenswrapper[17411]: I0223 13:21:31.765729 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-trusted-ca-bundle\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.766019 master-0 kubenswrapper[17411]: I0223 13:21:31.765809 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-oauth-config\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.766019 master-0 kubenswrapper[17411]: I0223 13:21:31.765858 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-serving-cert\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.766019 master-0 kubenswrapper[17411]: I0223 13:21:31.765880 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-config\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 
13:21:31.766019 master-0 kubenswrapper[17411]: I0223 13:21:31.765900 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-service-ca\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.766019 master-0 kubenswrapper[17411]: I0223 13:21:31.765959 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-oauth-serving-cert\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.766280 master-0 kubenswrapper[17411]: I0223 13:21:31.766051 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2rb\" (UniqueName: \"kubernetes.io/projected/09988a22-4301-4f22-9dea-2b00d94d1ad4-kube-api-access-hh2rb\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.868238 master-0 kubenswrapper[17411]: I0223 13:21:31.868163 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-oauth-config\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.868532 master-0 kubenswrapper[17411]: I0223 13:21:31.868425 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-serving-cert\") pod \"console-78988746df-4zq9k\" 
(UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.868648 master-0 kubenswrapper[17411]: I0223 13:21:31.868609 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-config\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.868719 master-0 kubenswrapper[17411]: I0223 13:21:31.868656 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-service-ca\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.868770 master-0 kubenswrapper[17411]: I0223 13:21:31.868725 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-oauth-serving-cert\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.868770 master-0 kubenswrapper[17411]: I0223 13:21:31.868759 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2rb\" (UniqueName: \"kubernetes.io/projected/09988a22-4301-4f22-9dea-2b00d94d1ad4-kube-api-access-hh2rb\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.868874 master-0 kubenswrapper[17411]: I0223 13:21:31.868822 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-trusted-ca-bundle\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.870074 master-0 kubenswrapper[17411]: I0223 13:21:31.870034 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-config\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.870317 master-0 kubenswrapper[17411]: I0223 13:21:31.870260 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-oauth-serving-cert\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.872122 master-0 kubenswrapper[17411]: I0223 13:21:31.870556 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-trusted-ca-bundle\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.872122 master-0 kubenswrapper[17411]: I0223 13:21:31.870553 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-service-ca\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.872850 master-0 kubenswrapper[17411]: I0223 13:21:31.872804 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-serving-cert\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.875349 master-0 kubenswrapper[17411]: I0223 13:21:31.875299 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-oauth-config\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:31.887995 master-0 kubenswrapper[17411]: I0223 13:21:31.887927 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78988746df-4zq9k"] Feb 23 13:21:31.917188 master-0 kubenswrapper[17411]: I0223 13:21:31.917128 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2rb\" (UniqueName: \"kubernetes.io/projected/09988a22-4301-4f22-9dea-2b00d94d1ad4-kube-api-access-hh2rb\") pod \"console-78988746df-4zq9k\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") " pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:32.056417 master-0 kubenswrapper[17411]: I0223 13:21:32.056273 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:32.341291 master-0 kubenswrapper[17411]: I0223 13:21:32.341118 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78988746df-4zq9k"] Feb 23 13:21:32.376193 master-0 kubenswrapper[17411]: I0223 13:21:32.375657 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-544ff8764f-zdxz4"] Feb 23 13:21:32.386000 master-0 kubenswrapper[17411]: I0223 13:21:32.385930 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544ff8764f-zdxz4"] Feb 23 13:21:32.386298 master-0 kubenswrapper[17411]: I0223 13:21:32.386086 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.481304 master-0 kubenswrapper[17411]: I0223 13:21:32.481217 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-trusted-ca-bundle\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.481510 master-0 kubenswrapper[17411]: I0223 13:21:32.481327 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-service-ca\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.481510 master-0 kubenswrapper[17411]: I0223 13:21:32.481460 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-oauth-config\") pod \"console-544ff8764f-zdxz4\" (UID: 
\"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.481586 master-0 kubenswrapper[17411]: I0223 13:21:32.481524 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfzlb\" (UniqueName: \"kubernetes.io/projected/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-kube-api-access-mfzlb\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.481586 master-0 kubenswrapper[17411]: I0223 13:21:32.481573 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-config\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.481684 master-0 kubenswrapper[17411]: I0223 13:21:32.481658 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-serving-cert\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.481771 master-0 kubenswrapper[17411]: I0223 13:21:32.481742 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-oauth-serving-cert\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.494705 master-0 kubenswrapper[17411]: I0223 13:21:32.494647 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78988746df-4zq9k"] Feb 
23 13:21:32.502122 master-0 kubenswrapper[17411]: I0223 13:21:32.502094 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-544ff8764f-zdxz4"] Feb 23 13:21:32.502732 master-0 kubenswrapper[17411]: E0223 13:21:32.502518 17411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[console-config console-oauth-config console-serving-cert kube-api-access-mfzlb oauth-serving-cert service-ca trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-console/console-544ff8764f-zdxz4" podUID="3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" Feb 23 13:21:32.537218 master-0 kubenswrapper[17411]: I0223 13:21:32.537152 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bbdbf64dd-7jcx8"] Feb 23 13:21:32.539059 master-0 kubenswrapper[17411]: I0223 13:21:32.539022 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.548783 master-0 kubenswrapper[17411]: I0223 13:21:32.548737 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bbdbf64dd-7jcx8"] Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584426 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-serving-cert\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584492 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-trusted-ca-bundle\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " 
pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584522 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-service-ca\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584564 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-trusted-ca-bundle\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584613 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-oauth-config\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584639 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfzlb\" (UniqueName: \"kubernetes.io/projected/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-kube-api-access-mfzlb\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584662 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-oauth-config\") 
pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584701 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-oauth-serving-cert\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584725 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-oauth-serving-cert\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584755 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-service-ca\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584776 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-config\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584816 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-serving-cert\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584856 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-config\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.585331 master-0 kubenswrapper[17411]: I0223 13:21:32.584882 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgzd\" (UniqueName: \"kubernetes.io/projected/d3a25543-83b2-444a-955f-5c0cc8ee65ec-kube-api-access-4dgzd\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.588783 master-0 kubenswrapper[17411]: I0223 13:21:32.588750 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-serving-cert\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.590192 master-0 kubenswrapper[17411]: I0223 13:21:32.589393 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-trusted-ca-bundle\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.590192 master-0 kubenswrapper[17411]: I0223 13:21:32.589945 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-service-ca\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.590192 master-0 kubenswrapper[17411]: I0223 13:21:32.590135 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-oauth-serving-cert\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.591209 master-0 kubenswrapper[17411]: I0223 13:21:32.591155 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-config\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.594310 master-0 kubenswrapper[17411]: I0223 13:21:32.594194 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-oauth-config\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.614656 master-0 kubenswrapper[17411]: I0223 13:21:32.614589 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfzlb\" (UniqueName: \"kubernetes.io/projected/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-kube-api-access-mfzlb\") pod \"console-544ff8764f-zdxz4\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.687064 master-0 kubenswrapper[17411]: I0223 13:21:32.686991 17411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-service-ca\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.687290 master-0 kubenswrapper[17411]: I0223 13:21:32.687167 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-oauth-config\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.687335 master-0 kubenswrapper[17411]: I0223 13:21:32.687275 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-oauth-serving-cert\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.687546 master-0 kubenswrapper[17411]: I0223 13:21:32.687506 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-config\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.687598 master-0 kubenswrapper[17411]: I0223 13:21:32.687576 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-serving-cert\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.687678 master-0 kubenswrapper[17411]: I0223 
13:21:32.687642 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dgzd\" (UniqueName: \"kubernetes.io/projected/d3a25543-83b2-444a-955f-5c0cc8ee65ec-kube-api-access-4dgzd\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.687721 master-0 kubenswrapper[17411]: I0223 13:21:32.687694 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-trusted-ca-bundle\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.688013 master-0 kubenswrapper[17411]: I0223 13:21:32.687974 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-service-ca\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.688592 master-0 kubenswrapper[17411]: I0223 13:21:32.688544 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-oauth-serving-cert\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.688902 master-0 kubenswrapper[17411]: I0223 13:21:32.688854 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-config\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.689268 master-0 
kubenswrapper[17411]: I0223 13:21:32.689219 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-trusted-ca-bundle\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.692346 master-0 kubenswrapper[17411]: I0223 13:21:32.692306 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-serving-cert\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.693192 master-0 kubenswrapper[17411]: I0223 13:21:32.693143 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-oauth-config\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.817342 master-0 kubenswrapper[17411]: I0223 13:21:32.817208 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dgzd\" (UniqueName: \"kubernetes.io/projected/d3a25543-83b2-444a-955f-5c0cc8ee65ec-kube-api-access-4dgzd\") pod \"console-6bbdbf64dd-7jcx8\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.891202 master-0 kubenswrapper[17411]: I0223 13:21:32.891020 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.893818 master-0 kubenswrapper[17411]: I0223 13:21:32.893609 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78988746df-4zq9k" event={"ID":"09988a22-4301-4f22-9dea-2b00d94d1ad4","Type":"ContainerStarted","Data":"e98eee0f3da5c26fe7126c873a58156f3bdb5d3ceff34b16d94afb222a5f0f97"} Feb 23 13:21:32.893818 master-0 kubenswrapper[17411]: I0223 13:21:32.893728 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78988746df-4zq9k" event={"ID":"09988a22-4301-4f22-9dea-2b00d94d1ad4","Type":"ContainerStarted","Data":"d063debd4be7d35b15669971a393c233144c92324a2ad0c3e2d95bd920d5405a"} Feb 23 13:21:32.938233 master-0 kubenswrapper[17411]: I0223 13:21:32.938133 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:32.940284 master-0 kubenswrapper[17411]: I0223 13:21:32.940167 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:32.951272 master-0 kubenswrapper[17411]: I0223 13:21:32.951160 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-78988746df-4zq9k" podStartSLOduration=1.9511212850000001 podStartE2EDuration="1.951121285s" podCreationTimestamp="2026-02-23 13:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:32.93826707 +0000 UTC m=+886.365773707" watchObservedRunningTime="2026-02-23 13:21:32.951121285 +0000 UTC m=+886.378627892" Feb 23 13:21:32.997025 master-0 kubenswrapper[17411]: I0223 13:21:32.996655 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzlb\" (UniqueName: \"kubernetes.io/projected/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-kube-api-access-mfzlb\") pod \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " Feb 23 13:21:32.997025 master-0 kubenswrapper[17411]: I0223 13:21:32.996734 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-oauth-config\") pod \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " Feb 23 13:21:32.997025 master-0 kubenswrapper[17411]: I0223 13:21:32.996754 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-serving-cert\") pod \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " Feb 23 13:21:32.997025 master-0 kubenswrapper[17411]: I0223 13:21:32.996837 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-trusted-ca-bundle\") pod \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " Feb 23 13:21:32.997025 master-0 kubenswrapper[17411]: I0223 13:21:32.996876 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-config\") pod \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " Feb 23 13:21:32.997380 master-0 kubenswrapper[17411]: I0223 13:21:32.997047 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-service-ca\") pod \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " Feb 23 13:21:32.997380 master-0 kubenswrapper[17411]: I0223 13:21:32.997101 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-oauth-serving-cert\") pod \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\" (UID: \"3a711c8e-63d0-405d-833d-ea5cd7fb8a2e\") " Feb 23 13:21:32.998492 master-0 kubenswrapper[17411]: I0223 13:21:32.998418 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-config" (OuterVolumeSpecName: "console-config") pod "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" (UID: "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:32.998492 master-0 kubenswrapper[17411]: I0223 13:21:32.998486 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" (UID: "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:32.999043 master-0 kubenswrapper[17411]: I0223 13:21:32.999006 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-service-ca" (OuterVolumeSpecName: "service-ca") pod "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" (UID: "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:33.000921 master-0 kubenswrapper[17411]: I0223 13:21:33.000160 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" (UID: "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:33.000921 master-0 kubenswrapper[17411]: I0223 13:21:33.000575 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-kube-api-access-mfzlb" (OuterVolumeSpecName: "kube-api-access-mfzlb") pod "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" (UID: "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e"). InnerVolumeSpecName "kube-api-access-mfzlb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:21:33.004809 master-0 kubenswrapper[17411]: I0223 13:21:33.004342 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" (UID: "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:33.005347 master-0 kubenswrapper[17411]: I0223 13:21:33.005222 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" (UID: "3a711c8e-63d0-405d-833d-ea5cd7fb8a2e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:33.100208 master-0 kubenswrapper[17411]: I0223 13:21:33.100127 17411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:33.100208 master-0 kubenswrapper[17411]: I0223 13:21:33.100175 17411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:33.100208 master-0 kubenswrapper[17411]: I0223 13:21:33.100186 17411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:33.100208 master-0 kubenswrapper[17411]: I0223 13:21:33.100195 17411 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:33.100208 master-0 kubenswrapper[17411]: I0223 13:21:33.100206 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfzlb\" (UniqueName: \"kubernetes.io/projected/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-kube-api-access-mfzlb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:33.100208 master-0 kubenswrapper[17411]: I0223 13:21:33.100215 17411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:33.100208 master-0 kubenswrapper[17411]: I0223 13:21:33.100224 17411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:33.431376 master-0 kubenswrapper[17411]: I0223 13:21:33.431301 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bbdbf64dd-7jcx8"] Feb 23 13:21:33.902367 master-0 kubenswrapper[17411]: I0223 13:21:33.902222 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-544ff8764f-zdxz4" Feb 23 13:21:33.902367 master-0 kubenswrapper[17411]: I0223 13:21:33.902222 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bbdbf64dd-7jcx8" event={"ID":"d3a25543-83b2-444a-955f-5c0cc8ee65ec","Type":"ContainerStarted","Data":"f2b8eb4a6b96999453be22eb34e81205b38cdebc80739719b0d7581c55022473"} Feb 23 13:21:33.902367 master-0 kubenswrapper[17411]: I0223 13:21:33.902370 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bbdbf64dd-7jcx8" event={"ID":"d3a25543-83b2-444a-955f-5c0cc8ee65ec","Type":"ContainerStarted","Data":"7f944120f6edbf7e69ddb386f189836910453375c696b43ba2fce2312bfc2fe9"} Feb 23 13:21:33.945853 master-0 kubenswrapper[17411]: I0223 13:21:33.945736 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bbdbf64dd-7jcx8" podStartSLOduration=1.94570463 podStartE2EDuration="1.94570463s" podCreationTimestamp="2026-02-23 13:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:33.929681425 +0000 UTC m=+887.357188062" watchObservedRunningTime="2026-02-23 13:21:33.94570463 +0000 UTC m=+887.373211237" Feb 23 13:21:34.003590 master-0 kubenswrapper[17411]: I0223 13:21:34.003457 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-544ff8764f-zdxz4"] Feb 23 13:21:34.012066 master-0 kubenswrapper[17411]: I0223 13:21:34.012004 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-544ff8764f-zdxz4"] Feb 23 13:21:34.884942 master-0 kubenswrapper[17411]: I0223 13:21:34.884853 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a711c8e-63d0-405d-833d-ea5cd7fb8a2e" path="/var/lib/kubelet/pods/3a711c8e-63d0-405d-833d-ea5cd7fb8a2e/volumes" Feb 23 13:21:36.397066 master-0 
kubenswrapper[17411]: I0223 13:21:36.396987 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 23 13:21:36.397768 master-0 kubenswrapper[17411]: I0223 13:21:36.397360 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="alertmanager" containerID="cri-o://7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd" gracePeriod=120 Feb 23 13:21:36.397768 master-0 kubenswrapper[17411]: I0223 13:21:36.397448 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="prom-label-proxy" containerID="cri-o://1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183" gracePeriod=120 Feb 23 13:21:36.397768 master-0 kubenswrapper[17411]: I0223 13:21:36.397506 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="config-reloader" containerID="cri-o://d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6" gracePeriod=120 Feb 23 13:21:36.397768 master-0 kubenswrapper[17411]: I0223 13:21:36.397514 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy-web" containerID="cri-o://cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118" gracePeriod=120 Feb 23 13:21:36.397768 master-0 kubenswrapper[17411]: I0223 13:21:36.397448 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy" 
containerID="cri-o://c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916" gracePeriod=120 Feb 23 13:21:36.397768 master-0 kubenswrapper[17411]: I0223 13:21:36.397488 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy-metric" containerID="cri-o://18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee" gracePeriod=120 Feb 23 13:21:36.955570 master-0 kubenswrapper[17411]: I0223 13:21:36.955439 17411 generic.go:334] "Generic (PLEG): container finished" podID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerID="1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183" exitCode=0 Feb 23 13:21:36.955570 master-0 kubenswrapper[17411]: I0223 13:21:36.955520 17411 generic.go:334] "Generic (PLEG): container finished" podID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerID="c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916" exitCode=0 Feb 23 13:21:36.955570 master-0 kubenswrapper[17411]: I0223 13:21:36.955544 17411 generic.go:334] "Generic (PLEG): container finished" podID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerID="d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6" exitCode=0 Feb 23 13:21:36.955570 master-0 kubenswrapper[17411]: I0223 13:21:36.955578 17411 generic.go:334] "Generic (PLEG): container finished" podID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerID="7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd" exitCode=0 Feb 23 13:21:36.956304 master-0 kubenswrapper[17411]: I0223 13:21:36.955515 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerDied","Data":"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183"} Feb 23 13:21:36.956717 master-0 kubenswrapper[17411]: I0223 13:21:36.956638 17411 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerDied","Data":"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916"} Feb 23 13:21:36.956717 master-0 kubenswrapper[17411]: I0223 13:21:36.956702 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerDied","Data":"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6"} Feb 23 13:21:36.956932 master-0 kubenswrapper[17411]: I0223 13:21:36.956728 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerDied","Data":"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd"} Feb 23 13:21:37.953485 master-0 kubenswrapper[17411]: I0223 13:21:37.953437 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:37.969148 master-0 kubenswrapper[17411]: I0223 13:21:37.969102 17411 generic.go:334] "Generic (PLEG): container finished" podID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerID="18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee" exitCode=0 Feb 23 13:21:37.969148 master-0 kubenswrapper[17411]: I0223 13:21:37.969135 17411 generic.go:334] "Generic (PLEG): container finished" podID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerID="cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118" exitCode=0 Feb 23 13:21:37.969458 master-0 kubenswrapper[17411]: I0223 13:21:37.969155 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerDied","Data":"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee"} Feb 23 13:21:37.969458 master-0 kubenswrapper[17411]: I0223 13:21:37.969187 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerDied","Data":"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118"} Feb 23 13:21:37.969458 master-0 kubenswrapper[17411]: I0223 13:21:37.969198 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b0e437b4-e6fd-482f-91a2-f48b9f087321","Type":"ContainerDied","Data":"50c7ec1f5ca4265757a29bcd7bd1cb805b067e2d12981dec3cf9d22b61572c34"} Feb 23 13:21:37.969458 master-0 kubenswrapper[17411]: I0223 13:21:37.969250 17411 scope.go:117] "RemoveContainer" containerID="1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183" Feb 23 13:21:37.969458 master-0 kubenswrapper[17411]: I0223 13:21:37.969372 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:37.989162 master-0 kubenswrapper[17411]: I0223 13:21:37.989061 17411 scope.go:117] "RemoveContainer" containerID="18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee" Feb 23 13:21:38.001075 master-0 kubenswrapper[17411]: I0223 13:21:38.000977 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.001319 master-0 kubenswrapper[17411]: I0223 13:21:38.001174 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-web-config\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.001319 master-0 kubenswrapper[17411]: I0223 13:21:38.001283 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-out\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.001594 master-0 kubenswrapper[17411]: I0223 13:21:38.001521 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.001694 master-0 kubenswrapper[17411]: I0223 13:21:38.001650 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-main-tls\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.005666 master-0 kubenswrapper[17411]: I0223 13:21:38.005575 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-metric\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.006069 master-0 kubenswrapper[17411]: I0223 13:21:38.006048 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-metrics-client-ca\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.006305 master-0 kubenswrapper[17411]: I0223 13:21:38.006286 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-main-db\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.006535 master-0 kubenswrapper[17411]: I0223 13:21:38.006516 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-volume\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.008064 master-0 kubenswrapper[17411]: I0223 13:21:38.007982 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-584sx\" (UniqueName: 
\"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-kube-api-access-584sx\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.008387 master-0 kubenswrapper[17411]: I0223 13:21:38.005798 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-out" (OuterVolumeSpecName: "config-out") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:21:38.013523 master-0 kubenswrapper[17411]: I0223 13:21:38.006338 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:38.013782 master-0 kubenswrapper[17411]: I0223 13:21:38.008102 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:38.014103 master-0 kubenswrapper[17411]: I0223 13:21:38.009728 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). 
InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:21:38.014318 master-0 kubenswrapper[17411]: I0223 13:21:38.010969 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:38.014434 master-0 kubenswrapper[17411]: I0223 13:21:38.011415 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:38.014569 master-0 kubenswrapper[17411]: I0223 13:21:38.008574 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-web\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.014749 master-0 kubenswrapper[17411]: I0223 13:21:38.014728 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-tls-assets\") pod \"b0e437b4-e6fd-482f-91a2-f48b9f087321\" (UID: \"b0e437b4-e6fd-482f-91a2-f48b9f087321\") " Feb 23 13:21:38.018069 master-0 kubenswrapper[17411]: I0223 13:21:38.017505 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:38.018069 master-0 kubenswrapper[17411]: I0223 13:21:38.017566 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "secret-alertmanager-main-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:38.023180 master-0 kubenswrapper[17411]: I0223 13:21:38.021685 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:21:38.023180 master-0 kubenswrapper[17411]: I0223 13:21:38.021904 17411 scope.go:117] "RemoveContainer" containerID="c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916" Feb 23 13:21:38.023180 master-0 kubenswrapper[17411]: I0223 13:21:38.022975 17411 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-out\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.025287 master-0 kubenswrapper[17411]: I0223 13:21:38.025218 17411 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.025287 master-0 kubenswrapper[17411]: I0223 13:21:38.025286 17411 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.025536 master-0 kubenswrapper[17411]: I0223 13:21:38.025310 17411 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.025536 master-0 
kubenswrapper[17411]: I0223 13:21:38.025333 17411 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.025536 master-0 kubenswrapper[17411]: I0223 13:21:38.025355 17411 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.025536 master-0 kubenswrapper[17411]: I0223 13:21:38.025371 17411 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.025536 master-0 kubenswrapper[17411]: I0223 13:21:38.025387 17411 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.025536 master-0 kubenswrapper[17411]: I0223 13:21:38.025406 17411 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0e437b4-e6fd-482f-91a2-f48b9f087321-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.045209 master-0 kubenswrapper[17411]: I0223 13:21:38.041643 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-volume" (OuterVolumeSpecName: "config-volume") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:38.049952 master-0 kubenswrapper[17411]: I0223 13:21:38.048551 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-kube-api-access-584sx" (OuterVolumeSpecName: "kube-api-access-584sx") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "kube-api-access-584sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:21:38.087766 master-0 kubenswrapper[17411]: I0223 13:21:38.087710 17411 scope.go:117] "RemoveContainer" containerID="cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118" Feb 23 13:21:38.108529 master-0 kubenswrapper[17411]: I0223 13:21:38.108182 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-web-config" (OuterVolumeSpecName: "web-config") pod "b0e437b4-e6fd-482f-91a2-f48b9f087321" (UID: "b0e437b4-e6fd-482f-91a2-f48b9f087321"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:38.110584 master-0 kubenswrapper[17411]: I0223 13:21:38.110150 17411 scope.go:117] "RemoveContainer" containerID="d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6" Feb 23 13:21:38.127227 master-0 kubenswrapper[17411]: I0223 13:21:38.127081 17411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.127227 master-0 kubenswrapper[17411]: I0223 13:21:38.127133 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-584sx\" (UniqueName: \"kubernetes.io/projected/b0e437b4-e6fd-482f-91a2-f48b9f087321-kube-api-access-584sx\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.127227 master-0 kubenswrapper[17411]: I0223 13:21:38.127147 17411 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0e437b4-e6fd-482f-91a2-f48b9f087321-web-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:38.131295 master-0 kubenswrapper[17411]: I0223 13:21:38.131273 17411 scope.go:117] "RemoveContainer" containerID="7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd" Feb 23 13:21:38.147300 master-0 kubenswrapper[17411]: I0223 13:21:38.147226 17411 scope.go:117] "RemoveContainer" containerID="a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0" Feb 23 13:21:38.166669 master-0 kubenswrapper[17411]: I0223 13:21:38.166607 17411 scope.go:117] "RemoveContainer" containerID="1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183" Feb 23 13:21:38.168638 master-0 kubenswrapper[17411]: E0223 13:21:38.168599 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183\": container with ID starting with 
1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183 not found: ID does not exist" containerID="1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183" Feb 23 13:21:38.168746 master-0 kubenswrapper[17411]: I0223 13:21:38.168648 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183"} err="failed to get container status \"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183\": rpc error: code = NotFound desc = could not find container \"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183\": container with ID starting with 1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183 not found: ID does not exist" Feb 23 13:21:38.168746 master-0 kubenswrapper[17411]: I0223 13:21:38.168683 17411 scope.go:117] "RemoveContainer" containerID="18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee" Feb 23 13:21:38.169304 master-0 kubenswrapper[17411]: E0223 13:21:38.169232 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee\": container with ID starting with 18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee not found: ID does not exist" containerID="18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee" Feb 23 13:21:38.169357 master-0 kubenswrapper[17411]: I0223 13:21:38.169311 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee"} err="failed to get container status \"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee\": rpc error: code = NotFound desc = could not find container \"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee\": container with ID starting with 
18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee not found: ID does not exist" Feb 23 13:21:38.169357 master-0 kubenswrapper[17411]: I0223 13:21:38.169348 17411 scope.go:117] "RemoveContainer" containerID="c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916" Feb 23 13:21:38.169750 master-0 kubenswrapper[17411]: E0223 13:21:38.169710 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916\": container with ID starting with c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916 not found: ID does not exist" containerID="c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916" Feb 23 13:21:38.169799 master-0 kubenswrapper[17411]: I0223 13:21:38.169768 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916"} err="failed to get container status \"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916\": rpc error: code = NotFound desc = could not find container \"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916\": container with ID starting with c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916 not found: ID does not exist" Feb 23 13:21:38.169836 master-0 kubenswrapper[17411]: I0223 13:21:38.169799 17411 scope.go:117] "RemoveContainer" containerID="cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118" Feb 23 13:21:38.170131 master-0 kubenswrapper[17411]: E0223 13:21:38.170106 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118\": container with ID starting with cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118 not found: ID does not exist" 
containerID="cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118" Feb 23 13:21:38.170165 master-0 kubenswrapper[17411]: I0223 13:21:38.170132 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118"} err="failed to get container status \"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118\": rpc error: code = NotFound desc = could not find container \"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118\": container with ID starting with cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118 not found: ID does not exist" Feb 23 13:21:38.170165 master-0 kubenswrapper[17411]: I0223 13:21:38.170147 17411 scope.go:117] "RemoveContainer" containerID="d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6" Feb 23 13:21:38.171590 master-0 kubenswrapper[17411]: E0223 13:21:38.171556 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6\": container with ID starting with d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6 not found: ID does not exist" containerID="d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6" Feb 23 13:21:38.171590 master-0 kubenswrapper[17411]: I0223 13:21:38.171584 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6"} err="failed to get container status \"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6\": rpc error: code = NotFound desc = could not find container \"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6\": container with ID starting with d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6 not found: ID does not exist" Feb 23 13:21:38.171685 master-0 
kubenswrapper[17411]: I0223 13:21:38.171601 17411 scope.go:117] "RemoveContainer" containerID="7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd" Feb 23 13:21:38.172011 master-0 kubenswrapper[17411]: E0223 13:21:38.171971 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd\": container with ID starting with 7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd not found: ID does not exist" containerID="7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd" Feb 23 13:21:38.172059 master-0 kubenswrapper[17411]: I0223 13:21:38.172004 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd"} err="failed to get container status \"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd\": rpc error: code = NotFound desc = could not find container \"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd\": container with ID starting with 7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd not found: ID does not exist" Feb 23 13:21:38.172059 master-0 kubenswrapper[17411]: I0223 13:21:38.172022 17411 scope.go:117] "RemoveContainer" containerID="a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0" Feb 23 13:21:38.172362 master-0 kubenswrapper[17411]: E0223 13:21:38.172327 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0\": container with ID starting with a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0 not found: ID does not exist" containerID="a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0" Feb 23 13:21:38.172451 master-0 kubenswrapper[17411]: I0223 13:21:38.172359 
17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0"} err="failed to get container status \"a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0\": rpc error: code = NotFound desc = could not find container \"a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0\": container with ID starting with a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0 not found: ID does not exist" Feb 23 13:21:38.172451 master-0 kubenswrapper[17411]: I0223 13:21:38.172379 17411 scope.go:117] "RemoveContainer" containerID="1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183" Feb 23 13:21:38.172697 master-0 kubenswrapper[17411]: I0223 13:21:38.172659 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183"} err="failed to get container status \"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183\": rpc error: code = NotFound desc = could not find container \"1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183\": container with ID starting with 1a239e1a3b191b48119d76efad646643e88041d1782cb52225b3459aad074183 not found: ID does not exist" Feb 23 13:21:38.172697 master-0 kubenswrapper[17411]: I0223 13:21:38.172690 17411 scope.go:117] "RemoveContainer" containerID="18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee" Feb 23 13:21:38.172973 master-0 kubenswrapper[17411]: I0223 13:21:38.172945 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee"} err="failed to get container status \"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee\": rpc error: code = NotFound desc = could not find container 
\"18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee\": container with ID starting with 18d9d2d3cdc48e8cde039877627e6ae5376d3299d962fca8eb1ad7eb08db92ee not found: ID does not exist" Feb 23 13:21:38.172973 master-0 kubenswrapper[17411]: I0223 13:21:38.172968 17411 scope.go:117] "RemoveContainer" containerID="c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916" Feb 23 13:21:38.173219 master-0 kubenswrapper[17411]: I0223 13:21:38.173184 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916"} err="failed to get container status \"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916\": rpc error: code = NotFound desc = could not find container \"c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916\": container with ID starting with c85a869dc3b510368b7c17fbd1c92e88cda9d7dce6c76089bf5a49bbf80ca916 not found: ID does not exist" Feb 23 13:21:38.173219 master-0 kubenswrapper[17411]: I0223 13:21:38.173212 17411 scope.go:117] "RemoveContainer" containerID="cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118" Feb 23 13:21:38.173500 master-0 kubenswrapper[17411]: I0223 13:21:38.173472 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118"} err="failed to get container status \"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118\": rpc error: code = NotFound desc = could not find container \"cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118\": container with ID starting with cc5d4e4e1012918d04e8300a79e253f19d1856b10efd5150647ebb34b74b0118 not found: ID does not exist" Feb 23 13:21:38.173500 master-0 kubenswrapper[17411]: I0223 13:21:38.173495 17411 scope.go:117] "RemoveContainer" containerID="d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6" Feb 23 
13:21:38.174539 master-0 kubenswrapper[17411]: I0223 13:21:38.174490 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6"} err="failed to get container status \"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6\": rpc error: code = NotFound desc = could not find container \"d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6\": container with ID starting with d82c2caa6d63b59ffaea4a29e5e293ba85715fdd28a64f88ff09b0784f4e00e6 not found: ID does not exist" Feb 23 13:21:38.174613 master-0 kubenswrapper[17411]: I0223 13:21:38.174544 17411 scope.go:117] "RemoveContainer" containerID="7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd" Feb 23 13:21:38.174833 master-0 kubenswrapper[17411]: I0223 13:21:38.174806 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd"} err="failed to get container status \"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd\": rpc error: code = NotFound desc = could not find container \"7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd\": container with ID starting with 7825e1326ab726cfcb7bef4a3b7289794c010d36ff727f0bc5103fdcd74f9ffd not found: ID does not exist" Feb 23 13:21:38.174833 master-0 kubenswrapper[17411]: I0223 13:21:38.174826 17411 scope.go:117] "RemoveContainer" containerID="a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0" Feb 23 13:21:38.175089 master-0 kubenswrapper[17411]: I0223 13:21:38.175065 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0"} err="failed to get container status \"a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0\": rpc error: code = NotFound desc = could not find 
container \"a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0\": container with ID starting with a1518d2c87645a3c09769971e3ae5e92fdc8b04d9aac2ee3e3442011c20c6db0 not found: ID does not exist" Feb 23 13:21:38.326432 master-0 kubenswrapper[17411]: I0223 13:21:38.326291 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 23 13:21:38.333511 master-0 kubenswrapper[17411]: I0223 13:21:38.333372 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 23 13:21:38.361079 master-0 kubenswrapper[17411]: I0223 13:21:38.360987 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 23 13:21:38.361516 master-0 kubenswrapper[17411]: E0223 13:21:38.361471 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="config-reloader" Feb 23 13:21:38.361516 master-0 kubenswrapper[17411]: I0223 13:21:38.361503 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="config-reloader" Feb 23 13:21:38.361659 master-0 kubenswrapper[17411]: E0223 13:21:38.361542 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy-metric" Feb 23 13:21:38.361659 master-0 kubenswrapper[17411]: I0223 13:21:38.361558 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy-metric" Feb 23 13:21:38.361659 master-0 kubenswrapper[17411]: E0223 13:21:38.361579 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="alertmanager" Feb 23 13:21:38.361659 master-0 kubenswrapper[17411]: I0223 13:21:38.361592 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" 
containerName="alertmanager" Feb 23 13:21:38.361659 master-0 kubenswrapper[17411]: E0223 13:21:38.361628 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="prom-label-proxy" Feb 23 13:21:38.361659 master-0 kubenswrapper[17411]: I0223 13:21:38.361641 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="prom-label-proxy" Feb 23 13:21:38.361966 master-0 kubenswrapper[17411]: E0223 13:21:38.361670 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy-web" Feb 23 13:21:38.361966 master-0 kubenswrapper[17411]: I0223 13:21:38.361683 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy-web" Feb 23 13:21:38.361966 master-0 kubenswrapper[17411]: E0223 13:21:38.361721 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy" Feb 23 13:21:38.361966 master-0 kubenswrapper[17411]: I0223 13:21:38.361734 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy" Feb 23 13:21:38.361966 master-0 kubenswrapper[17411]: E0223 13:21:38.361754 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="init-config-reloader" Feb 23 13:21:38.361966 master-0 kubenswrapper[17411]: I0223 13:21:38.361766 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="init-config-reloader" Feb 23 13:21:38.362322 master-0 kubenswrapper[17411]: I0223 13:21:38.361996 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy-metric" Feb 23 13:21:38.362322 
master-0 kubenswrapper[17411]: I0223 13:21:38.362028 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="prom-label-proxy" Feb 23 13:21:38.362322 master-0 kubenswrapper[17411]: I0223 13:21:38.362054 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy" Feb 23 13:21:38.362322 master-0 kubenswrapper[17411]: I0223 13:21:38.362078 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="config-reloader" Feb 23 13:21:38.362322 master-0 kubenswrapper[17411]: I0223 13:21:38.362119 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="kube-rbac-proxy-web" Feb 23 13:21:38.362322 master-0 kubenswrapper[17411]: I0223 13:21:38.362138 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" containerName="alertmanager" Feb 23 13:21:38.367171 master-0 kubenswrapper[17411]: I0223 13:21:38.367111 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.370985 master-0 kubenswrapper[17411]: I0223 13:21:38.370919 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wnzv6" Feb 23 13:21:38.371118 master-0 kubenswrapper[17411]: I0223 13:21:38.371010 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 23 13:21:38.371118 master-0 kubenswrapper[17411]: I0223 13:21:38.371106 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 23 13:21:38.371290 master-0 kubenswrapper[17411]: I0223 13:21:38.371108 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 23 13:21:38.371594 master-0 kubenswrapper[17411]: I0223 13:21:38.371518 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 23 13:21:38.371691 master-0 kubenswrapper[17411]: I0223 13:21:38.371605 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 23 13:21:38.371760 master-0 kubenswrapper[17411]: I0223 13:21:38.371702 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 23 13:21:38.373526 master-0 kubenswrapper[17411]: I0223 13:21:38.373489 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 23 13:21:38.380903 master-0 kubenswrapper[17411]: I0223 13:21:38.380767 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 23 13:21:38.386498 master-0 kubenswrapper[17411]: I0223 13:21:38.386443 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Feb 23 13:21:38.432764 master-0 kubenswrapper[17411]: I0223 13:21:38.432607 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433024 master-0 kubenswrapper[17411]: I0223 13:21:38.432850 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433024 master-0 kubenswrapper[17411]: I0223 13:21:38.432917 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-web-config\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433024 master-0 kubenswrapper[17411]: I0223 13:21:38.432987 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433160 master-0 kubenswrapper[17411]: I0223 13:21:38.433060 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" 
(UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433284 master-0 kubenswrapper[17411]: I0223 13:21:38.433120 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433349 master-0 kubenswrapper[17411]: I0223 13:21:38.433319 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433477 master-0 kubenswrapper[17411]: I0223 13:21:38.433427 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5tvs\" (UniqueName: \"kubernetes.io/projected/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-kube-api-access-v5tvs\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433562 master-0 kubenswrapper[17411]: I0223 13:21:38.433525 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-config-out\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.433686 master-0 kubenswrapper[17411]: I0223 
13:21:38.433638 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.434067 master-0 kubenswrapper[17411]: I0223 13:21:38.433990 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.434229 master-0 kubenswrapper[17411]: I0223 13:21:38.434192 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-config-volume\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.536475 master-0 kubenswrapper[17411]: I0223 13:21:38.536388 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.536779 master-0 kubenswrapper[17411]: I0223 13:21:38.536644 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: 
\"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.536987 master-0 kubenswrapper[17411]: I0223 13:21:38.536921 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5tvs\" (UniqueName: \"kubernetes.io/projected/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-kube-api-access-v5tvs\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537339 master-0 kubenswrapper[17411]: I0223 13:21:38.537271 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-config-out\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537473 master-0 kubenswrapper[17411]: I0223 13:21:38.537405 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537584 master-0 kubenswrapper[17411]: I0223 13:21:38.537500 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537662 master-0 kubenswrapper[17411]: I0223 13:21:38.537599 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-config-volume\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537732 master-0 kubenswrapper[17411]: I0223 13:21:38.537688 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537795 master-0 kubenswrapper[17411]: I0223 13:21:38.537745 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537862 master-0 kubenswrapper[17411]: I0223 13:21:38.537800 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-web-config\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537946 master-0 kubenswrapper[17411]: I0223 13:21:38.537860 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.537946 master-0 kubenswrapper[17411]: I0223 13:21:38.537921 17411 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.538508 master-0 kubenswrapper[17411]: I0223 13:21:38.538454 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.538661 master-0 kubenswrapper[17411]: I0223 13:21:38.538536 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.538768 master-0 kubenswrapper[17411]: I0223 13:21:38.538722 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.543012 master-0 kubenswrapper[17411]: I0223 13:21:38.542953 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.543538 master-0 kubenswrapper[17411]: I0223 13:21:38.543467 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-config-out\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.544923 master-0 kubenswrapper[17411]: I0223 13:21:38.544843 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.545036 master-0 kubenswrapper[17411]: I0223 13:21:38.544925 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.545734 master-0 kubenswrapper[17411]: I0223 13:21:38.545684 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.546046 master-0 kubenswrapper[17411]: I0223 13:21:38.545971 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-web-config\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.546847 master-0 kubenswrapper[17411]: I0223 
13:21:38.546774 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-config-volume\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.548865 master-0 kubenswrapper[17411]: I0223 13:21:38.548804 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.574735 master-0 kubenswrapper[17411]: I0223 13:21:38.574641 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5tvs\" (UniqueName: \"kubernetes.io/projected/951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc-kube-api-access-v5tvs\") pod \"alertmanager-main-0\" (UID: \"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.703913 master-0 kubenswrapper[17411]: I0223 13:21:38.703833 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 23 13:21:38.886970 master-0 kubenswrapper[17411]: I0223 13:21:38.886888 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e437b4-e6fd-482f-91a2-f48b9f087321" path="/var/lib/kubelet/pods/b0e437b4-e6fd-482f-91a2-f48b9f087321/volumes" Feb 23 13:21:39.281180 master-0 kubenswrapper[17411]: I0223 13:21:39.281110 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 23 13:21:39.290460 master-0 kubenswrapper[17411]: W0223 13:21:39.290340 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod951c4db9_c2d8_43e4_9fc0_36f4c7f3e1dc.slice/crio-01c9b1aab7f1988e03bce9dc4020f0cfbe8ca1382ac571c3e6d2c91f58897f5b WatchSource:0}: Error finding container 01c9b1aab7f1988e03bce9dc4020f0cfbe8ca1382ac571c3e6d2c91f58897f5b: Status 404 returned error can't find the container with id 01c9b1aab7f1988e03bce9dc4020f0cfbe8ca1382ac571c3e6d2c91f58897f5b Feb 23 13:21:39.394931 master-0 kubenswrapper[17411]: E0223 13:21:39.394867 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:21:39.396074 master-0 kubenswrapper[17411]: E0223 13:21:39.395934 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:21:39.397620 master-0 kubenswrapper[17411]: E0223 13:21:39.397541 17411 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 23 13:21:39.397690 master-0 kubenswrapper[17411]: E0223 13:21:39.397638 17411 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" podUID="5ed5ee95-4638-4512-abb9-efad2f49dc19" containerName="kube-multus-additional-cni-plugins" Feb 23 13:21:40.003586 master-0 kubenswrapper[17411]: I0223 13:21:40.003518 17411 generic.go:334] "Generic (PLEG): container finished" podID="951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc" containerID="7d99796a02d8cf5265ef1112e3fc66baae1cdb4955b6fc07d48eafffe229c9cd" exitCode=0 Feb 23 13:21:40.003586 master-0 kubenswrapper[17411]: I0223 13:21:40.003591 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc","Type":"ContainerDied","Data":"7d99796a02d8cf5265ef1112e3fc66baae1cdb4955b6fc07d48eafffe229c9cd"} Feb 23 13:21:40.004039 master-0 kubenswrapper[17411]: I0223 13:21:40.003634 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc","Type":"ContainerStarted","Data":"01c9b1aab7f1988e03bce9dc4020f0cfbe8ca1382ac571c3e6d2c91f58897f5b"} Feb 23 13:21:40.854391 master-0 kubenswrapper[17411]: I0223 13:21:40.852994 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:21:40.854391 master-0 kubenswrapper[17411]: I0223 13:21:40.853443 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="prometheus" containerID="cri-o://df3847509227b18cfa2057df9af88aeb7bbc0404ce6befb7751bd3e07fced95b" gracePeriod=600 Feb 23 13:21:40.854391 master-0 kubenswrapper[17411]: I0223 13:21:40.853752 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy" containerID="cri-o://848dd18f30dfd1e2f1024adae59eb6e05998671f920f766e813b8325be190abb" gracePeriod=600 Feb 23 13:21:40.854391 master-0 kubenswrapper[17411]: I0223 13:21:40.853893 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy-web" containerID="cri-o://6724674d6284fdd05381b7d0daef8a39a226e4c324110414bcbc6793e5bd3d5f" gracePeriod=600 Feb 23 13:21:40.854391 master-0 kubenswrapper[17411]: I0223 13:21:40.853949 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="thanos-sidecar" containerID="cri-o://7355347876eb6f26645282da59b2039fa5f5bf7c99724e7e85490f25fa53bd9d" gracePeriod=600 Feb 23 13:21:40.854391 master-0 kubenswrapper[17411]: I0223 13:21:40.853996 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="config-reloader" containerID="cri-o://379c5ee6081cf25cf74b27ad60c344645f271de34631a4c85b7eae36a346bc1d" gracePeriod=600 Feb 23 13:21:40.854391 master-0 kubenswrapper[17411]: I0223 13:21:40.854059 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy-thanos" 
containerID="cri-o://21d478f17ab841facc6af3c11882e409ca6a5733c3567c73c296122b45bd2178" gracePeriod=600 Feb 23 13:21:41.033965 master-0 kubenswrapper[17411]: I0223 13:21:41.033916 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc","Type":"ContainerStarted","Data":"89f9f5e73090df84e258fd5277e6f1fb91e3cd500ce3a93fb69d02545954a67a"} Feb 23 13:21:41.033965 master-0 kubenswrapper[17411]: I0223 13:21:41.033969 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc","Type":"ContainerStarted","Data":"5b6b159ce5abff692865c2f21f4b7a7747b6caf5321b12f505d30e3b6f168fc2"} Feb 23 13:21:41.034236 master-0 kubenswrapper[17411]: I0223 13:21:41.033986 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc","Type":"ContainerStarted","Data":"59cf45e6d14dc46b1b539221a2b4d2b3f5faa8bd7c70bdaff99cb1f029f5aa8c"} Feb 23 13:21:41.034236 master-0 kubenswrapper[17411]: I0223 13:21:41.034002 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc","Type":"ContainerStarted","Data":"58d397dc666da3c362864eb136c56a62687205d2cc8b9ec26b7e93f1abdf990b"} Feb 23 13:21:41.034236 master-0 kubenswrapper[17411]: I0223 13:21:41.034014 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc","Type":"ContainerStarted","Data":"4a0d38276b4080e1d226a8e2fcfb938e9e7fcdbb7accf36c2b67d4278b5c16c9"} Feb 23 13:21:41.039623 master-0 kubenswrapper[17411]: I0223 13:21:41.039581 17411 generic.go:334] "Generic (PLEG): container finished" podID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" 
containerID="21d478f17ab841facc6af3c11882e409ca6a5733c3567c73c296122b45bd2178" exitCode=0 Feb 23 13:21:41.039623 master-0 kubenswrapper[17411]: I0223 13:21:41.039617 17411 generic.go:334] "Generic (PLEG): container finished" podID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerID="848dd18f30dfd1e2f1024adae59eb6e05998671f920f766e813b8325be190abb" exitCode=0 Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039625 17411 generic.go:334] "Generic (PLEG): container finished" podID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerID="6724674d6284fdd05381b7d0daef8a39a226e4c324110414bcbc6793e5bd3d5f" exitCode=0 Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039632 17411 generic.go:334] "Generic (PLEG): container finished" podID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerID="7355347876eb6f26645282da59b2039fa5f5bf7c99724e7e85490f25fa53bd9d" exitCode=0 Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039642 17411 generic.go:334] "Generic (PLEG): container finished" podID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerID="379c5ee6081cf25cf74b27ad60c344645f271de34631a4c85b7eae36a346bc1d" exitCode=0 Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039649 17411 generic.go:334] "Generic (PLEG): container finished" podID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerID="df3847509227b18cfa2057df9af88aeb7bbc0404ce6befb7751bd3e07fced95b" exitCode=0 Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039661 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerDied","Data":"21d478f17ab841facc6af3c11882e409ca6a5733c3567c73c296122b45bd2178"} Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039694 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerDied","Data":"848dd18f30dfd1e2f1024adae59eb6e05998671f920f766e813b8325be190abb"} Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039710 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerDied","Data":"6724674d6284fdd05381b7d0daef8a39a226e4c324110414bcbc6793e5bd3d5f"} Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039723 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerDied","Data":"7355347876eb6f26645282da59b2039fa5f5bf7c99724e7e85490f25fa53bd9d"} Feb 23 13:21:41.039737 master-0 kubenswrapper[17411]: I0223 13:21:41.039735 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerDied","Data":"379c5ee6081cf25cf74b27ad60c344645f271de34631a4c85b7eae36a346bc1d"} Feb 23 13:21:41.040000 master-0 kubenswrapper[17411]: I0223 13:21:41.039751 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerDied","Data":"df3847509227b18cfa2057df9af88aeb7bbc0404ce6befb7751bd3e07fced95b"} Feb 23 13:21:41.271983 master-0 kubenswrapper[17411]: I0223 13:21:41.271927 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:41.390152 master-0 kubenswrapper[17411]: I0223 13:21:41.390021 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-web-config\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390152 master-0 kubenswrapper[17411]: I0223 13:21:41.390096 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-db\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390152 master-0 kubenswrapper[17411]: I0223 13:21:41.390120 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-kube-rbac-proxy\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390152 master-0 kubenswrapper[17411]: I0223 13:21:41.390136 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-thanos-prometheus-http-client-file\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390453 master-0 kubenswrapper[17411]: I0223 13:21:41.390230 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-grpc-tls\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390453 master-0 
kubenswrapper[17411]: I0223 13:21:41.390323 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-metrics-client-certs\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390878 master-0 kubenswrapper[17411]: I0223 13:21:41.390833 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390929 master-0 kubenswrapper[17411]: I0223 13:21:41.390897 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-tls-assets\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390929 master-0 kubenswrapper[17411]: I0223 13:21:41.390924 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config-out\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.390994 master-0 kubenswrapper[17411]: I0223 13:21:41.390974 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-tls\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391048 master-0 kubenswrapper[17411]: I0223 13:21:41.391029 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391084 master-0 kubenswrapper[17411]: I0223 13:21:41.391065 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391149 master-0 kubenswrapper[17411]: I0223 13:21:41.391126 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-kubelet-serving-ca-bundle\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391195 master-0 kubenswrapper[17411]: I0223 13:21:41.391166 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391195 master-0 kubenswrapper[17411]: I0223 13:21:41.391189 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-metrics-client-ca\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391270 master-0 kubenswrapper[17411]: I0223 13:21:41.391216 17411 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-rulefiles-0\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391302 master-0 kubenswrapper[17411]: I0223 13:21:41.391280 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-serving-certs-ca-bundle\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391344 master-0 kubenswrapper[17411]: I0223 13:21:41.391326 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wmmh\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-kube-api-access-7wmmh\") pod \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\" (UID: \"c229faa3-6eb1-42d6-8e10-f4cadc952d17\") " Feb 23 13:21:41.391866 master-0 kubenswrapper[17411]: I0223 13:21:41.391823 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:41.392028 master-0 kubenswrapper[17411]: I0223 13:21:41.392010 17411 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.393230 master-0 kubenswrapper[17411]: I0223 13:21:41.393191 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.393985 master-0 kubenswrapper[17411]: I0223 13:21:41.393961 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:41.394086 master-0 kubenswrapper[17411]: I0223 13:21:41.393977 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "secret-grpc-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.394152 master-0 kubenswrapper[17411]: I0223 13:21:41.394016 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.394227 master-0 kubenswrapper[17411]: I0223 13:21:41.394125 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:41.394527 master-0 kubenswrapper[17411]: I0223 13:21:41.394482 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:41.394613 master-0 kubenswrapper[17411]: I0223 13:21:41.394577 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). 
InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:21:41.395438 master-0 kubenswrapper[17411]: I0223 13:21:41.395402 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-kube-api-access-7wmmh" (OuterVolumeSpecName: "kube-api-access-7wmmh") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "kube-api-access-7wmmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:21:41.395498 master-0 kubenswrapper[17411]: I0223 13:21:41.395406 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.395539 master-0 kubenswrapper[17411]: I0223 13:21:41.395493 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.396187 master-0 kubenswrapper[17411]: I0223 13:21:41.396155 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config" (OuterVolumeSpecName: "config") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.396358 master-0 kubenswrapper[17411]: I0223 13:21:41.396205 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.396754 master-0 kubenswrapper[17411]: I0223 13:21:41.396717 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:21:41.397231 master-0 kubenswrapper[17411]: I0223 13:21:41.397043 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.397231 master-0 kubenswrapper[17411]: I0223 13:21:41.397205 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:21:41.397516 master-0 kubenswrapper[17411]: I0223 13:21:41.397471 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config-out" (OuterVolumeSpecName: "config-out") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:21:41.450090 master-0 kubenswrapper[17411]: I0223 13:21:41.450019 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-web-config" (OuterVolumeSpecName: "web-config") pod "c229faa3-6eb1-42d6-8e10-f4cadc952d17" (UID: "c229faa3-6eb1-42d6-8e10-f4cadc952d17"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:21:41.493591 master-0 kubenswrapper[17411]: I0223 13:21:41.493502 17411 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-grpc-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.493591 master-0 kubenswrapper[17411]: I0223 13:21:41.493549 17411 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.493591 master-0 kubenswrapper[17411]: I0223 13:21:41.493562 17411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.493591 master-0 kubenswrapper[17411]: I0223 13:21:41.493574 17411 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.493591 master-0 kubenswrapper[17411]: I0223 13:21:41.493588 17411 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-config-out\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.493591 master-0 kubenswrapper[17411]: I0223 13:21:41.493600 17411 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.493591 master-0 kubenswrapper[17411]: I0223 13:21:41.493613 17411 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493626 17411 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493641 17411 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493656 17411 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" 
DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493669 17411 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493682 17411 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c229faa3-6eb1-42d6-8e10-f4cadc952d17-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493694 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wmmh\" (UniqueName: \"kubernetes.io/projected/c229faa3-6eb1-42d6-8e10-f4cadc952d17-kube-api-access-7wmmh\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493709 17411 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-web-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493720 17411 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c229faa3-6eb1-42d6-8e10-f4cadc952d17-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493731 17411 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:41.494350 master-0 kubenswrapper[17411]: I0223 13:21:41.493742 17411 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/c229faa3-6eb1-42d6-8e10-f4cadc952d17-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:42.056538 master-0 kubenswrapper[17411]: I0223 13:21:42.056477 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:42.058140 master-0 kubenswrapper[17411]: I0223 13:21:42.058051 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"951c4db9-c2d8-43e4-9fc0-36f4c7f3e1dc","Type":"ContainerStarted","Data":"de7ff5cdfdccc1c4830f34ded52c16fe86f37c338ae8cd5c4959106ad5054e97"} Feb 23 13:21:42.065653 master-0 kubenswrapper[17411]: I0223 13:21:42.065577 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c229faa3-6eb1-42d6-8e10-f4cadc952d17","Type":"ContainerDied","Data":"fcfc88379baf23d7a87fa2f79e200ec61bdbcac138e571974b4701a1640fa7af"} Feb 23 13:21:42.065846 master-0 kubenswrapper[17411]: I0223 13:21:42.065663 17411 scope.go:117] "RemoveContainer" containerID="21d478f17ab841facc6af3c11882e409ca6a5733c3567c73c296122b45bd2178" Feb 23 13:21:42.065846 master-0 kubenswrapper[17411]: I0223 13:21:42.065675 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.093121 master-0 kubenswrapper[17411]: I0223 13:21:42.092891 17411 scope.go:117] "RemoveContainer" containerID="848dd18f30dfd1e2f1024adae59eb6e05998671f920f766e813b8325be190abb" Feb 23 13:21:42.128853 master-0 kubenswrapper[17411]: I0223 13:21:42.128731 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.128703417 podStartE2EDuration="4.128703417s" podCreationTimestamp="2026-02-23 13:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:42.106548008 +0000 UTC m=+895.534054695" watchObservedRunningTime="2026-02-23 13:21:42.128703417 +0000 UTC m=+895.556210024" Feb 23 13:21:42.161952 master-0 kubenswrapper[17411]: I0223 13:21:42.160993 17411 scope.go:117] "RemoveContainer" containerID="6724674d6284fdd05381b7d0daef8a39a226e4c324110414bcbc6793e5bd3d5f" Feb 23 13:21:42.176426 master-0 kubenswrapper[17411]: I0223 13:21:42.176354 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:21:42.190493 master-0 kubenswrapper[17411]: I0223 13:21:42.190430 17411 scope.go:117] "RemoveContainer" containerID="7355347876eb6f26645282da59b2039fa5f5bf7c99724e7e85490f25fa53bd9d" Feb 23 13:21:42.194171 master-0 kubenswrapper[17411]: I0223 13:21:42.193748 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.214356 17411 scope.go:117] "RemoveContainer" containerID="379c5ee6081cf25cf74b27ad60c344645f271de34631a4c85b7eae36a346bc1d" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.214542 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:21:42.216190 master-0 
kubenswrapper[17411]: E0223 13:21:42.214871 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy-web" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.214887 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy-web" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: E0223 13:21:42.214910 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="prometheus" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.214919 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="prometheus" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: E0223 13:21:42.214934 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="thanos-sidecar" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.214944 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="thanos-sidecar" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: E0223 13:21:42.214962 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="init-config-reloader" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.214970 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="init-config-reloader" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: E0223 13:21:42.214988 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy-thanos" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.214997 17411 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy-thanos" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: E0223 13:21:42.215020 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.215029 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: E0223 13:21:42.215038 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="config-reloader" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.215046 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="config-reloader" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.215226 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="thanos-sidecar" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.215267 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="prometheus" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.215285 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="config-reloader" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.215311 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy-thanos" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.215325 17411 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy-web" Feb 23 13:21:42.216190 master-0 kubenswrapper[17411]: I0223 13:21:42.215341 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" containerName="kube-rbac-proxy" Feb 23 13:21:42.218739 master-0 kubenswrapper[17411]: I0223 13:21:42.218715 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.222956 master-0 kubenswrapper[17411]: I0223 13:21:42.221609 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 23 13:21:42.222956 master-0 kubenswrapper[17411]: I0223 13:21:42.221967 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 23 13:21:42.222956 master-0 kubenswrapper[17411]: I0223 13:21:42.222029 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 23 13:21:42.222956 master-0 kubenswrapper[17411]: I0223 13:21:42.222188 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 23 13:21:42.222956 master-0 kubenswrapper[17411]: I0223 13:21:42.222305 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 23 13:21:42.222956 master-0 kubenswrapper[17411]: I0223 13:21:42.222360 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7q6an9sqsfn51" Feb 23 13:21:42.222956 master-0 kubenswrapper[17411]: I0223 13:21:42.222619 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 23 13:21:42.222956 master-0 kubenswrapper[17411]: I0223 13:21:42.222625 17411 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 23 13:21:42.223414 master-0 kubenswrapper[17411]: I0223 13:21:42.223008 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-54m2k" Feb 23 13:21:42.225646 master-0 kubenswrapper[17411]: I0223 13:21:42.225383 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 23 13:21:42.228717 master-0 kubenswrapper[17411]: I0223 13:21:42.227273 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 23 13:21:42.228717 master-0 kubenswrapper[17411]: I0223 13:21:42.227476 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 23 13:21:42.233624 master-0 kubenswrapper[17411]: I0223 13:21:42.233567 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 23 13:21:42.243616 master-0 kubenswrapper[17411]: I0223 13:21:42.243555 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:21:42.248628 master-0 kubenswrapper[17411]: I0223 13:21:42.248591 17411 scope.go:117] "RemoveContainer" containerID="df3847509227b18cfa2057df9af88aeb7bbc0404ce6befb7751bd3e07fced95b" Feb 23 13:21:42.265611 master-0 kubenswrapper[17411]: I0223 13:21:42.265135 17411 scope.go:117] "RemoveContainer" containerID="1e2c8bf2649bb83ebb59ccefe68f87d1cbf2774db7c0e989383bc2b02c2dea7b" Feb 23 13:21:42.308702 master-0 kubenswrapper[17411]: I0223 13:21:42.308526 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-web-config\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.308702 master-0 kubenswrapper[17411]: I0223 13:21:42.308594 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.308702 master-0 kubenswrapper[17411]: I0223 13:21:42.308677 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309044 master-0 kubenswrapper[17411]: I0223 13:21:42.308858 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309122 master-0 kubenswrapper[17411]: I0223 13:21:42.309078 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309178 master-0 kubenswrapper[17411]: I0223 13:21:42.309127 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309221 master-0 kubenswrapper[17411]: I0223 13:21:42.309189 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309298 master-0 kubenswrapper[17411]: I0223 13:21:42.309232 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309350 master-0 kubenswrapper[17411]: I0223 13:21:42.309309 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309350 master-0 kubenswrapper[17411]: I0223 13:21:42.309332 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-config-out\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309487 master-0 kubenswrapper[17411]: I0223 13:21:42.309355 17411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309702 master-0 kubenswrapper[17411]: I0223 13:21:42.309591 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-config\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309702 master-0 kubenswrapper[17411]: I0223 13:21:42.309641 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309702 master-0 kubenswrapper[17411]: I0223 13:21:42.309680 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309852 master-0 kubenswrapper[17411]: I0223 13:21:42.309751 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod 
\"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309852 master-0 kubenswrapper[17411]: I0223 13:21:42.309783 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phnvx\" (UniqueName: \"kubernetes.io/projected/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-kube-api-access-phnvx\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309933 master-0 kubenswrapper[17411]: I0223 13:21:42.309861 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.309933 master-0 kubenswrapper[17411]: I0223 13:21:42.309894 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.412098 master-0 kubenswrapper[17411]: I0223 13:21:42.412046 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.412373 master-0 kubenswrapper[17411]: I0223 13:21:42.412357 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-config-out\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.412473 master-0 kubenswrapper[17411]: I0223 13:21:42.412458 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.412597 master-0 kubenswrapper[17411]: I0223 13:21:42.412582 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-config\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.412695 master-0 kubenswrapper[17411]: I0223 13:21:42.412677 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.412840 master-0 kubenswrapper[17411]: I0223 13:21:42.412824 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413022 master-0 kubenswrapper[17411]: I0223 13:21:42.412999 17411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413231 master-0 kubenswrapper[17411]: I0223 13:21:42.412670 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413231 master-0 kubenswrapper[17411]: I0223 13:21:42.413189 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phnvx\" (UniqueName: \"kubernetes.io/projected/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-kube-api-access-phnvx\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413610 master-0 kubenswrapper[17411]: I0223 13:21:42.413584 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413686 master-0 kubenswrapper[17411]: I0223 13:21:42.413640 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413719 master-0 
kubenswrapper[17411]: I0223 13:21:42.413683 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413894 master-0 kubenswrapper[17411]: I0223 13:21:42.413860 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-web-config\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413980 master-0 kubenswrapper[17411]: I0223 13:21:42.413913 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.413980 master-0 kubenswrapper[17411]: I0223 13:21:42.413948 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.414089 master-0 kubenswrapper[17411]: I0223 13:21:42.413994 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.414322 
master-0 kubenswrapper[17411]: I0223 13:21:42.414302 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.414467 master-0 kubenswrapper[17411]: I0223 13:21:42.414449 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.414617 master-0 kubenswrapper[17411]: I0223 13:21:42.414596 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.414760 master-0 kubenswrapper[17411]: I0223 13:21:42.414730 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.415394 master-0 kubenswrapper[17411]: I0223 13:21:42.414484 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.415724 master-0 kubenswrapper[17411]: I0223 13:21:42.415683 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.416416 master-0 kubenswrapper[17411]: I0223 13:21:42.416361 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.416622 master-0 kubenswrapper[17411]: I0223 13:21:42.416528 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-config-out\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.417156 master-0 kubenswrapper[17411]: I0223 13:21:42.417015 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-config\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.417668 master-0 kubenswrapper[17411]: I0223 13:21:42.417646 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.419031 master-0 kubenswrapper[17411]: I0223 13:21:42.418515 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.419898 master-0 kubenswrapper[17411]: I0223 13:21:42.419843 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.421066 master-0 kubenswrapper[17411]: I0223 13:21:42.421016 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.421263 master-0 kubenswrapper[17411]: I0223 13:21:42.421220 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.421852 master-0 kubenswrapper[17411]: I0223 13:21:42.421807 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-web-config\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 
23 13:21:42.423047 master-0 kubenswrapper[17411]: I0223 13:21:42.423009 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.424368 master-0 kubenswrapper[17411]: I0223 13:21:42.424321 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.425513 master-0 kubenswrapper[17411]: I0223 13:21:42.425476 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.431160 master-0 kubenswrapper[17411]: I0223 13:21:42.431128 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phnvx\" (UniqueName: \"kubernetes.io/projected/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-kube-api-access-phnvx\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.431295 master-0 kubenswrapper[17411]: I0223 13:21:42.431211 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ab5dd751-c26b-4ac9-8408-86adb8d86a5f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"ab5dd751-c26b-4ac9-8408-86adb8d86a5f\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.550669 master-0 kubenswrapper[17411]: I0223 13:21:42.550602 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:21:42.879451 master-0 kubenswrapper[17411]: I0223 13:21:42.878792 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c229faa3-6eb1-42d6-8e10-f4cadc952d17" path="/var/lib/kubelet/pods/c229faa3-6eb1-42d6-8e10-f4cadc952d17/volumes" Feb 23 13:21:42.939447 master-0 kubenswrapper[17411]: I0223 13:21:42.939368 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:42.939447 master-0 kubenswrapper[17411]: I0223 13:21:42.939448 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:42.944871 master-0 kubenswrapper[17411]: I0223 13:21:42.944812 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:43.009199 master-0 kubenswrapper[17411]: I0223 13:21:43.009071 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 23 13:21:43.021172 master-0 kubenswrapper[17411]: W0223 13:21:43.021105 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab5dd751_c26b_4ac9_8408_86adb8d86a5f.slice/crio-cb59b1a8ff33ee1fa42fd64392347c14247efb7df4c077ee12ff7d157bda5f4f WatchSource:0}: Error finding container cb59b1a8ff33ee1fa42fd64392347c14247efb7df4c077ee12ff7d157bda5f4f: Status 404 returned error can't find the container with id cb59b1a8ff33ee1fa42fd64392347c14247efb7df4c077ee12ff7d157bda5f4f Feb 23 13:21:43.077553 master-0 kubenswrapper[17411]: I0223 13:21:43.077503 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ab5dd751-c26b-4ac9-8408-86adb8d86a5f","Type":"ContainerStarted","Data":"cb59b1a8ff33ee1fa42fd64392347c14247efb7df4c077ee12ff7d157bda5f4f"} Feb 23 13:21:43.084668 master-0 kubenswrapper[17411]: I0223 13:21:43.084589 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:21:43.254987 master-0 kubenswrapper[17411]: I0223 13:21:43.254906 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b9778d748-nlz5s"] Feb 23 13:21:44.093071 master-0 kubenswrapper[17411]: I0223 13:21:44.092998 17411 generic.go:334] "Generic (PLEG): container finished" podID="ab5dd751-c26b-4ac9-8408-86adb8d86a5f" containerID="dcff0fda67e91be2e287364af2c79c39711b809cb3c0fcaea05bc00a956232fe" exitCode=0 Feb 23 13:21:44.094132 master-0 kubenswrapper[17411]: I0223 13:21:44.093104 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ab5dd751-c26b-4ac9-8408-86adb8d86a5f","Type":"ContainerDied","Data":"dcff0fda67e91be2e287364af2c79c39711b809cb3c0fcaea05bc00a956232fe"} Feb 23 13:21:45.103535 master-0 kubenswrapper[17411]: I0223 13:21:45.103446 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ab5dd751-c26b-4ac9-8408-86adb8d86a5f","Type":"ContainerStarted","Data":"a0fc81e1041a664cb2ed593797e915d4c017de9501f5da27b7f01021ae959733"} Feb 23 13:21:45.103535 master-0 kubenswrapper[17411]: I0223 13:21:45.103520 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ab5dd751-c26b-4ac9-8408-86adb8d86a5f","Type":"ContainerStarted","Data":"83f59de51472c8bff769903e023c1cdc178185fe7c99a97ec19abd9909c67a44"} Feb 23 13:21:45.103535 master-0 kubenswrapper[17411]: I0223 13:21:45.103536 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ab5dd751-c26b-4ac9-8408-86adb8d86a5f","Type":"ContainerStarted","Data":"b5425f710af93c05973d65808502ed492554fd1a545fcf6f9b00fbb6698dfc85"} Feb 23 13:21:45.103535 master-0 kubenswrapper[17411]: I0223 13:21:45.103546 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ab5dd751-c26b-4ac9-8408-86adb8d86a5f","Type":"ContainerStarted","Data":"fb1d0e2c4bd3e4159893cedbd3a5eb35bedc4acb1671af421e22483ec9178c39"} Feb 23 13:21:46.014523 master-0 kubenswrapper[17411]: I0223 13:21:46.014423 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-kcxdv"] Feb 23 13:21:46.015618 master-0 kubenswrapper[17411]: I0223 13:21:46.015559 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:46.019139 master-0 kubenswrapper[17411]: I0223 13:21:46.019051 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 13:21:46.019363 master-0 kubenswrapper[17411]: I0223 13:21:46.019140 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 13:21:46.029643 master-0 kubenswrapper[17411]: I0223 13:21:46.029540 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-kcxdv"] Feb 23 13:21:46.117555 master-0 kubenswrapper[17411]: I0223 13:21:46.117478 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ab5dd751-c26b-4ac9-8408-86adb8d86a5f","Type":"ContainerStarted","Data":"85e658a4f7cec33172e9abc174ae01c0c7ced12abdf9cf4716fe5ca7d8d3209e"} Feb 23 13:21:46.117555 master-0 kubenswrapper[17411]: I0223 13:21:46.117543 17411 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ab5dd751-c26b-4ac9-8408-86adb8d86a5f","Type":"ContainerStarted","Data":"c415d7b0f31071afa8c5ac0f4ccc50c4f3fc2f7c1aa5a9f95f0bff7e390bb15c"} Feb 23 13:21:46.132782 master-0 kubenswrapper[17411]: I0223 13:21:46.132663 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/47ec3a69-3b8c-4ef8-8458-a864f12c1536-nginx-conf\") pod \"networking-console-plugin-79f587d78f-kcxdv\" (UID: \"47ec3a69-3b8c-4ef8-8458-a864f12c1536\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:46.132937 master-0 kubenswrapper[17411]: I0223 13:21:46.132849 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/47ec3a69-3b8c-4ef8-8458-a864f12c1536-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-kcxdv\" (UID: \"47ec3a69-3b8c-4ef8-8458-a864f12c1536\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:46.182417 master-0 kubenswrapper[17411]: I0223 13:21:46.182235 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.182196575 podStartE2EDuration="4.182196575s" podCreationTimestamp="2026-02-23 13:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:46.15630505 +0000 UTC m=+899.583811757" watchObservedRunningTime="2026-02-23 13:21:46.182196575 +0000 UTC m=+899.609703212" Feb 23 13:21:46.235409 master-0 kubenswrapper[17411]: I0223 13:21:46.235286 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/47ec3a69-3b8c-4ef8-8458-a864f12c1536-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-kcxdv\" (UID: \"47ec3a69-3b8c-4ef8-8458-a864f12c1536\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:46.235713 master-0 kubenswrapper[17411]: I0223 13:21:46.235668 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/47ec3a69-3b8c-4ef8-8458-a864f12c1536-nginx-conf\") pod \"networking-console-plugin-79f587d78f-kcxdv\" (UID: \"47ec3a69-3b8c-4ef8-8458-a864f12c1536\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:46.236420 master-0 kubenswrapper[17411]: E0223 13:21:46.236373 17411 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 23 13:21:46.236505 master-0 kubenswrapper[17411]: E0223 13:21:46.236444 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47ec3a69-3b8c-4ef8-8458-a864f12c1536-networking-console-plugin-cert podName:47ec3a69-3b8c-4ef8-8458-a864f12c1536 nodeName:}" failed. No retries permitted until 2026-02-23 13:21:46.736424857 +0000 UTC m=+900.163931454 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/47ec3a69-3b8c-4ef8-8458-a864f12c1536-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-kcxdv" (UID: "47ec3a69-3b8c-4ef8-8458-a864f12c1536") : secret "networking-console-plugin-cert" not found Feb 23 13:21:46.240006 master-0 kubenswrapper[17411]: I0223 13:21:46.239931 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/47ec3a69-3b8c-4ef8-8458-a864f12c1536-nginx-conf\") pod \"networking-console-plugin-79f587d78f-kcxdv\" (UID: \"47ec3a69-3b8c-4ef8-8458-a864f12c1536\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:46.426053 master-0 kubenswrapper[17411]: I0223 13:21:46.425916 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cdf5bf6fc-ws9gr"] Feb 23 13:21:46.427153 master-0 kubenswrapper[17411]: I0223 13:21:46.427103 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.440695 master-0 kubenswrapper[17411]: I0223 13:21:46.440506 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cdf5bf6fc-ws9gr"] Feb 23 13:21:46.540644 master-0 kubenswrapper[17411]: I0223 13:21:46.540576 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbcxb\" (UniqueName: \"kubernetes.io/projected/1c0c0578-9329-492f-9453-9503d4007aa3-kube-api-access-nbcxb\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.540866 master-0 kubenswrapper[17411]: I0223 13:21:46.540710 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-oauth-serving-cert\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.541005 master-0 kubenswrapper[17411]: I0223 13:21:46.540960 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-console-config\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.541202 master-0 kubenswrapper[17411]: I0223 13:21:46.541152 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-oauth-config\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.541290 
master-0 kubenswrapper[17411]: I0223 13:21:46.541227 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-trusted-ca-bundle\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.541393 master-0 kubenswrapper[17411]: I0223 13:21:46.541342 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-serving-cert\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.541436 master-0 kubenswrapper[17411]: I0223 13:21:46.541392 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-service-ca\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.643058 master-0 kubenswrapper[17411]: I0223 13:21:46.642946 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbcxb\" (UniqueName: \"kubernetes.io/projected/1c0c0578-9329-492f-9453-9503d4007aa3-kube-api-access-nbcxb\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.643058 master-0 kubenswrapper[17411]: I0223 13:21:46.643050 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-oauth-serving-cert\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: 
\"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.643508 master-0 kubenswrapper[17411]: I0223 13:21:46.643146 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-console-config\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.643508 master-0 kubenswrapper[17411]: I0223 13:21:46.643218 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-oauth-config\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.643508 master-0 kubenswrapper[17411]: I0223 13:21:46.643426 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-trusted-ca-bundle\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.643508 master-0 kubenswrapper[17411]: I0223 13:21:46.643482 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-serving-cert\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.643508 master-0 kubenswrapper[17411]: I0223 13:21:46.643502 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-service-ca\") pod 
\"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.644298 master-0 kubenswrapper[17411]: I0223 13:21:46.644213 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-oauth-serving-cert\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.644298 master-0 kubenswrapper[17411]: I0223 13:21:46.644288 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-console-config\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.644487 master-0 kubenswrapper[17411]: I0223 13:21:46.644306 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-service-ca\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.644571 master-0 kubenswrapper[17411]: I0223 13:21:46.644550 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-trusted-ca-bundle\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.646826 master-0 kubenswrapper[17411]: I0223 13:21:46.646772 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-serving-cert\") pod 
\"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.648983 master-0 kubenswrapper[17411]: I0223 13:21:46.648919 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-oauth-config\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.661368 master-0 kubenswrapper[17411]: I0223 13:21:46.661321 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcxb\" (UniqueName: \"kubernetes.io/projected/1c0c0578-9329-492f-9453-9503d4007aa3-kube-api-access-nbcxb\") pod \"console-7cdf5bf6fc-ws9gr\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.746292 master-0 kubenswrapper[17411]: I0223 13:21:46.746121 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/47ec3a69-3b8c-4ef8-8458-a864f12c1536-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-kcxdv\" (UID: \"47ec3a69-3b8c-4ef8-8458-a864f12c1536\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:46.751926 master-0 kubenswrapper[17411]: I0223 13:21:46.751842 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/47ec3a69-3b8c-4ef8-8458-a864f12c1536-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-kcxdv\" (UID: \"47ec3a69-3b8c-4ef8-8458-a864f12c1536\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:46.764350 master-0 kubenswrapper[17411]: I0223 13:21:46.764230 17411 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:21:46.956757 master-0 kubenswrapper[17411]: I0223 13:21:46.956676 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" Feb 23 13:21:47.237500 master-0 kubenswrapper[17411]: I0223 13:21:47.237447 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cdf5bf6fc-ws9gr"] Feb 23 13:21:47.250061 master-0 kubenswrapper[17411]: W0223 13:21:47.249990 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c0c0578_9329_492f_9453_9503d4007aa3.slice/crio-7c1bc202949f7cae9f66b14a83c9bff346d77ad8f376cd40e1db2449cd741fc1 WatchSource:0}: Error finding container 7c1bc202949f7cae9f66b14a83c9bff346d77ad8f376cd40e1db2449cd741fc1: Status 404 returned error can't find the container with id 7c1bc202949f7cae9f66b14a83c9bff346d77ad8f376cd40e1db2449cd741fc1 Feb 23 13:21:47.406173 master-0 kubenswrapper[17411]: I0223 13:21:47.405669 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-kcxdv"] Feb 23 13:21:47.410383 master-0 kubenswrapper[17411]: W0223 13:21:47.410320 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47ec3a69_3b8c_4ef8_8458_a864f12c1536.slice/crio-3801681d2c81373da76b0f993bbde3045c314068ad594bf7860f02057b774e8e WatchSource:0}: Error finding container 3801681d2c81373da76b0f993bbde3045c314068ad594bf7860f02057b774e8e: Status 404 returned error can't find the container with id 3801681d2c81373da76b0f993bbde3045c314068ad594bf7860f02057b774e8e Feb 23 13:21:47.552338 master-0 kubenswrapper[17411]: I0223 13:21:47.552214 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 13:21:48.144384 master-0 kubenswrapper[17411]: I0223 13:21:48.144323 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cdf5bf6fc-ws9gr" event={"ID":"1c0c0578-9329-492f-9453-9503d4007aa3","Type":"ContainerStarted","Data":"70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18"}
Feb 23 13:21:48.144384 master-0 kubenswrapper[17411]: I0223 13:21:48.144388 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cdf5bf6fc-ws9gr" event={"ID":"1c0c0578-9329-492f-9453-9503d4007aa3","Type":"ContainerStarted","Data":"7c1bc202949f7cae9f66b14a83c9bff346d77ad8f376cd40e1db2449cd741fc1"}
Feb 23 13:21:48.146624 master-0 kubenswrapper[17411]: I0223 13:21:48.146536 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" event={"ID":"47ec3a69-3b8c-4ef8-8458-a864f12c1536","Type":"ContainerStarted","Data":"3801681d2c81373da76b0f993bbde3045c314068ad594bf7860f02057b774e8e"}
Feb 23 13:21:48.169718 master-0 kubenswrapper[17411]: I0223 13:21:48.169640 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cdf5bf6fc-ws9gr" podStartSLOduration=2.169619067 podStartE2EDuration="2.169619067s" podCreationTimestamp="2026-02-23 13:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:48.1683268 +0000 UTC m=+901.595833497" watchObservedRunningTime="2026-02-23 13:21:48.169619067 +0000 UTC m=+901.597125654"
Feb 23 13:21:49.162504 master-0 kubenswrapper[17411]: I0223 13:21:49.162169 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" event={"ID":"47ec3a69-3b8c-4ef8-8458-a864f12c1536","Type":"ContainerStarted","Data":"71315f2e1294e1e6969b3c0cd2bf7a47f1270903eb1994d7c8f55ac9ddecd186"}
Feb 23 13:21:49.186213 master-0 kubenswrapper[17411]: I0223 13:21:49.185162 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-79f587d78f-kcxdv" podStartSLOduration=2.87553684 podStartE2EDuration="4.185142948s" podCreationTimestamp="2026-02-23 13:21:45 +0000 UTC" firstStartedPulling="2026-02-23 13:21:47.413155189 +0000 UTC m=+900.840661786" lastFinishedPulling="2026-02-23 13:21:48.722761297 +0000 UTC m=+902.150267894" observedRunningTime="2026-02-23 13:21:49.184655154 +0000 UTC m=+902.612161801" watchObservedRunningTime="2026-02-23 13:21:49.185142948 +0000 UTC m=+902.612649545"
Feb 23 13:21:49.395641 master-0 kubenswrapper[17411]: E0223 13:21:49.395558 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 23 13:21:49.397363 master-0 kubenswrapper[17411]: E0223 13:21:49.397324 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 23 13:21:49.398506 master-0 kubenswrapper[17411]: E0223 13:21:49.398462 17411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 23 13:21:49.398585 master-0 kubenswrapper[17411]: E0223 13:21:49.398504 17411 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" podUID="5ed5ee95-4638-4512-abb9-efad2f49dc19" containerName="kube-multus-additional-cni-plugins"
Feb 23 13:21:52.916063 master-0 kubenswrapper[17411]: I0223 13:21:52.916010 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-46dxp_5ed5ee95-4638-4512-abb9-efad2f49dc19/kube-multus-additional-cni-plugins/0.log"
Feb 23 13:21:52.917355 master-0 kubenswrapper[17411]: I0223 13:21:52.916089 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp"
Feb 23 13:21:53.080514 master-0 kubenswrapper[17411]: I0223 13:21:53.080279 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5ed5ee95-4638-4512-abb9-efad2f49dc19-ready\") pod \"5ed5ee95-4638-4512-abb9-efad2f49dc19\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") "
Feb 23 13:21:53.080514 master-0 kubenswrapper[17411]: I0223 13:21:53.080354 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr465\" (UniqueName: \"kubernetes.io/projected/5ed5ee95-4638-4512-abb9-efad2f49dc19-kube-api-access-kr465\") pod \"5ed5ee95-4638-4512-abb9-efad2f49dc19\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") "
Feb 23 13:21:53.080514 master-0 kubenswrapper[17411]: I0223 13:21:53.080449 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5ed5ee95-4638-4512-abb9-efad2f49dc19-cni-sysctl-allowlist\") pod \"5ed5ee95-4638-4512-abb9-efad2f49dc19\" (UID:
\"5ed5ee95-4638-4512-abb9-efad2f49dc19\") "
Feb 23 13:21:53.080899 master-0 kubenswrapper[17411]: I0223 13:21:53.080539 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ed5ee95-4638-4512-abb9-efad2f49dc19-tuning-conf-dir\") pod \"5ed5ee95-4638-4512-abb9-efad2f49dc19\" (UID: \"5ed5ee95-4638-4512-abb9-efad2f49dc19\") "
Feb 23 13:21:53.080899 master-0 kubenswrapper[17411]: I0223 13:21:53.080734 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed5ee95-4638-4512-abb9-efad2f49dc19-ready" (OuterVolumeSpecName: "ready") pod "5ed5ee95-4638-4512-abb9-efad2f49dc19" (UID: "5ed5ee95-4638-4512-abb9-efad2f49dc19"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:21:53.080899 master-0 kubenswrapper[17411]: I0223 13:21:53.080731 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed5ee95-4638-4512-abb9-efad2f49dc19-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "5ed5ee95-4638-4512-abb9-efad2f49dc19" (UID: "5ed5ee95-4638-4512-abb9-efad2f49dc19"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:21:53.081222 master-0 kubenswrapper[17411]: I0223 13:21:53.081192 17411 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ed5ee95-4638-4512-abb9-efad2f49dc19-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:53.081222 master-0 kubenswrapper[17411]: I0223 13:21:53.081213 17411 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5ed5ee95-4638-4512-abb9-efad2f49dc19-ready\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:53.081595 master-0 kubenswrapper[17411]: I0223 13:21:53.081521 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed5ee95-4638-4512-abb9-efad2f49dc19-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "5ed5ee95-4638-4512-abb9-efad2f49dc19" (UID: "5ed5ee95-4638-4512-abb9-efad2f49dc19"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:21:53.084807 master-0 kubenswrapper[17411]: I0223 13:21:53.084737 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed5ee95-4638-4512-abb9-efad2f49dc19-kube-api-access-kr465" (OuterVolumeSpecName: "kube-api-access-kr465") pod "5ed5ee95-4638-4512-abb9-efad2f49dc19" (UID: "5ed5ee95-4638-4512-abb9-efad2f49dc19"). InnerVolumeSpecName "kube-api-access-kr465". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:21:53.183069 master-0 kubenswrapper[17411]: I0223 13:21:53.182995 17411 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5ed5ee95-4638-4512-abb9-efad2f49dc19-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:53.183069 master-0 kubenswrapper[17411]: I0223 13:21:53.183049 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr465\" (UniqueName: \"kubernetes.io/projected/5ed5ee95-4638-4512-abb9-efad2f49dc19-kube-api-access-kr465\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:53.200702 master-0 kubenswrapper[17411]: I0223 13:21:53.200600 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-46dxp_5ed5ee95-4638-4512-abb9-efad2f49dc19/kube-multus-additional-cni-plugins/0.log"
Feb 23 13:21:53.200955 master-0 kubenswrapper[17411]: I0223 13:21:53.200690 17411 generic.go:334] "Generic (PLEG): container finished" podID="5ed5ee95-4638-4512-abb9-efad2f49dc19" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d" exitCode=137
Feb 23 13:21:53.200955 master-0 kubenswrapper[17411]: I0223 13:21:53.200753 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" event={"ID":"5ed5ee95-4638-4512-abb9-efad2f49dc19","Type":"ContainerDied","Data":"e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d"}
Feb 23 13:21:53.200955 master-0 kubenswrapper[17411]: I0223 13:21:53.200819 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp" event={"ID":"5ed5ee95-4638-4512-abb9-efad2f49dc19","Type":"ContainerDied","Data":"4e9c9ccddfe80c8d8f0111a71b970a11c0b8efc0d3cff8734f6c98541b7874e0"}
Feb 23 13:21:53.200955 master-0 kubenswrapper[17411]: I0223 13:21:53.200845 17411 scope.go:117] "RemoveContainer" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d"
Feb 23 13:21:53.200955 master-0 kubenswrapper[17411]: I0223 13:21:53.200859 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-46dxp"
Feb 23 13:21:53.251007 master-0 kubenswrapper[17411]: I0223 13:21:53.250948 17411 scope.go:117] "RemoveContainer" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d"
Feb 23 13:21:53.251599 master-0 kubenswrapper[17411]: E0223 13:21:53.251540 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d\": container with ID starting with e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d not found: ID does not exist" containerID="e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d"
Feb 23 13:21:53.251695 master-0 kubenswrapper[17411]: I0223 13:21:53.251597 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d"} err="failed to get container status \"e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d\": rpc error: code = NotFound desc = could not find container \"e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d\": container with ID starting with e7722ed0d1dca539653b614f7bb87866766bf617fac06ab75bf29cd948bc295d not found: ID does not exist"
Feb 23 13:21:53.269176 master-0 kubenswrapper[17411]: I0223 13:21:53.269076 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-46dxp"]
Feb 23 13:21:53.275226 master-0 kubenswrapper[17411]: I0223 13:21:53.275128 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-46dxp"]
Feb 23 13:21:54.895769 master-0 kubenswrapper[17411]: I0223 13:21:54.895575 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed5ee95-4638-4512-abb9-efad2f49dc19" path="/var/lib/kubelet/pods/5ed5ee95-4638-4512-abb9-efad2f49dc19/volumes"
Feb 23 13:21:56.102897 master-0 kubenswrapper[17411]: I0223 13:21:56.102811 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-55fc6cb76d-9jsfs" podUID="cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" containerName="console" containerID="cri-o://b76c9abf714dbf7f3c22da2e43433195586724aa73047a6fbf53b302a613afdd" gracePeriod=15
Feb 23 13:21:56.236130 master-0 kubenswrapper[17411]: I0223 13:21:56.236025 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55fc6cb76d-9jsfs_cf1e79bb-bc6b-4cd8-9988-0adf5b658b80/console/0.log"
Feb 23 13:21:56.236667 master-0 kubenswrapper[17411]: I0223 13:21:56.236147 17411 generic.go:334] "Generic (PLEG): container finished" podID="cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" containerID="b76c9abf714dbf7f3c22da2e43433195586724aa73047a6fbf53b302a613afdd" exitCode=2
Feb 23 13:21:56.236667 master-0 kubenswrapper[17411]: I0223 13:21:56.236213 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55fc6cb76d-9jsfs" event={"ID":"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80","Type":"ContainerDied","Data":"b76c9abf714dbf7f3c22da2e43433195586724aa73047a6fbf53b302a613afdd"}
Feb 23 13:21:56.657734 master-0 kubenswrapper[17411]: I0223 13:21:56.657671 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55fc6cb76d-9jsfs_cf1e79bb-bc6b-4cd8-9988-0adf5b658b80/console/0.log"
Feb 23 13:21:56.657890 master-0 kubenswrapper[17411]: I0223 13:21:56.657782 17411 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-55fc6cb76d-9jsfs"
Feb 23 13:21:56.757606 master-0 kubenswrapper[17411]: I0223 13:21:56.757487 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-oauth-config\") pod \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") "
Feb 23 13:21:56.757606 master-0 kubenswrapper[17411]: I0223 13:21:56.757568 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-serving-cert\") pod \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") "
Feb 23 13:21:56.757606 master-0 kubenswrapper[17411]: I0223 13:21:56.757617 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-service-ca\") pod \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") "
Feb 23 13:21:56.758332 master-0 kubenswrapper[17411]: I0223 13:21:56.757665 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-oauth-serving-cert\") pod \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") "
Feb 23 13:21:56.758332 master-0 kubenswrapper[17411]: I0223 13:21:56.757741 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pk25\" (UniqueName: \"kubernetes.io/projected/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-kube-api-access-2pk25\") pod \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") "
Feb 23 13:21:56.758332 master-0 kubenswrapper[17411]: I0223 13:21:56.757811 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-config\") pod \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\" (UID: \"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80\") "
Feb 23 13:21:56.758332 master-0 kubenswrapper[17411]: I0223 13:21:56.758315 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" (UID: "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:21:56.759951 master-0 kubenswrapper[17411]: I0223 13:21:56.758304 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-service-ca" (OuterVolumeSpecName: "service-ca") pod "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" (UID: "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:21:56.759951 master-0 kubenswrapper[17411]: I0223 13:21:56.759187 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-config" (OuterVolumeSpecName: "console-config") pod "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" (UID: "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:21:56.761007 master-0 kubenswrapper[17411]: I0223 13:21:56.760939 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" (UID: "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:21:56.761522 master-0 kubenswrapper[17411]: I0223 13:21:56.761472 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" (UID: "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:21:56.762759 master-0 kubenswrapper[17411]: I0223 13:21:56.762670 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-kube-api-access-2pk25" (OuterVolumeSpecName: "kube-api-access-2pk25") pod "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" (UID: "cf1e79bb-bc6b-4cd8-9988-0adf5b658b80"). InnerVolumeSpecName "kube-api-access-2pk25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:21:56.764849 master-0 kubenswrapper[17411]: I0223 13:21:56.764696 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cdf5bf6fc-ws9gr"
Feb 23 13:21:56.764849 master-0 kubenswrapper[17411]: I0223 13:21:56.764744 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cdf5bf6fc-ws9gr"
Feb 23 13:21:56.771805 master-0 kubenswrapper[17411]: I0223 13:21:56.771752 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cdf5bf6fc-ws9gr"
Feb 23 13:21:56.860570 master-0 kubenswrapper[17411]: I0223 13:21:56.859862 17411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:56.860570 master-0 kubenswrapper[17411]: I0223 13:21:56.859940 17411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:56.860570 master-0 kubenswrapper[17411]: I0223 13:21:56.859953 17411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:56.860570 master-0 kubenswrapper[17411]: I0223 13:21:56.859963 17411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:56.860570 master-0 kubenswrapper[17411]: I0223 13:21:56.859972 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pk25\" (UniqueName: \"kubernetes.io/projected/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-kube-api-access-2pk25\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:56.860570 master-0 kubenswrapper[17411]: I0223 13:21:56.859982 17411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80-console-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:57.248918 master-0 kubenswrapper[17411]: I0223 13:21:57.248843 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55fc6cb76d-9jsfs_cf1e79bb-bc6b-4cd8-9988-0adf5b658b80/console/0.log"
Feb 23 13:21:57.249599 master-0 kubenswrapper[17411]: I0223 13:21:57.249055 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55fc6cb76d-9jsfs"
Feb 23 13:21:57.249599 master-0 kubenswrapper[17411]: I0223 13:21:57.249054 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55fc6cb76d-9jsfs" event={"ID":"cf1e79bb-bc6b-4cd8-9988-0adf5b658b80","Type":"ContainerDied","Data":"ec04ba18bb3cf99facf201115b7affcd132dcb1c2d2593882ffbbfb3700d60ce"}
Feb 23 13:21:57.249599 master-0 kubenswrapper[17411]: I0223 13:21:57.249158 17411 scope.go:117] "RemoveContainer" containerID="b76c9abf714dbf7f3c22da2e43433195586724aa73047a6fbf53b302a613afdd"
Feb 23 13:21:57.256070 master-0 kubenswrapper[17411]: I0223 13:21:57.256010 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cdf5bf6fc-ws9gr"
Feb 23 13:21:57.326910 master-0 kubenswrapper[17411]: I0223 13:21:57.326844 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55fc6cb76d-9jsfs"]
Feb 23 13:21:57.331897 master-0 kubenswrapper[17411]: I0223 13:21:57.331852 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-55fc6cb76d-9jsfs"]
Feb 23 13:21:57.352046 master-0 kubenswrapper[17411]:
I0223 13:21:57.351989 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bbdbf64dd-7jcx8"]
Feb 23 13:21:57.948095 master-0 kubenswrapper[17411]: I0223 13:21:57.948017 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-78988746df-4zq9k" podUID="09988a22-4301-4f22-9dea-2b00d94d1ad4" containerName="console" containerID="cri-o://e98eee0f3da5c26fe7126c873a58156f3bdb5d3ceff34b16d94afb222a5f0f97" gracePeriod=15
Feb 23 13:21:58.258872 master-0 kubenswrapper[17411]: I0223 13:21:58.258671 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78988746df-4zq9k_09988a22-4301-4f22-9dea-2b00d94d1ad4/console/0.log"
Feb 23 13:21:58.258872 master-0 kubenswrapper[17411]: I0223 13:21:58.258733 17411 generic.go:334] "Generic (PLEG): container finished" podID="09988a22-4301-4f22-9dea-2b00d94d1ad4" containerID="e98eee0f3da5c26fe7126c873a58156f3bdb5d3ceff34b16d94afb222a5f0f97" exitCode=2
Feb 23 13:21:58.258872 master-0 kubenswrapper[17411]: I0223 13:21:58.258807 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78988746df-4zq9k" event={"ID":"09988a22-4301-4f22-9dea-2b00d94d1ad4","Type":"ContainerDied","Data":"e98eee0f3da5c26fe7126c873a58156f3bdb5d3ceff34b16d94afb222a5f0f97"}
Feb 23 13:21:58.448493 master-0 kubenswrapper[17411]: I0223 13:21:58.448439 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78988746df-4zq9k_09988a22-4301-4f22-9dea-2b00d94d1ad4/console/0.log"
Feb 23 13:21:58.448713 master-0 kubenswrapper[17411]: I0223 13:21:58.448519 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78988746df-4zq9k"
Feb 23 13:21:58.593622 master-0 kubenswrapper[17411]: I0223 13:21:58.593494 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-oauth-serving-cert\") pod \"09988a22-4301-4f22-9dea-2b00d94d1ad4\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") "
Feb 23 13:21:58.593622 master-0 kubenswrapper[17411]: I0223 13:21:58.593558 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-oauth-config\") pod \"09988a22-4301-4f22-9dea-2b00d94d1ad4\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") "
Feb 23 13:21:58.593622 master-0 kubenswrapper[17411]: I0223 13:21:58.593584 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-serving-cert\") pod \"09988a22-4301-4f22-9dea-2b00d94d1ad4\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") "
Feb 23 13:21:58.593880 master-0 kubenswrapper[17411]: I0223 13:21:58.593631 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-service-ca\") pod \"09988a22-4301-4f22-9dea-2b00d94d1ad4\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") "
Feb 23 13:21:58.593880 master-0 kubenswrapper[17411]: I0223 13:21:58.593669 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-trusted-ca-bundle\") pod \"09988a22-4301-4f22-9dea-2b00d94d1ad4\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") "
Feb 23 13:21:58.593880 master-0 kubenswrapper[17411]: I0223 13:21:58.593729 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh2rb\" (UniqueName: \"kubernetes.io/projected/09988a22-4301-4f22-9dea-2b00d94d1ad4-kube-api-access-hh2rb\") pod \"09988a22-4301-4f22-9dea-2b00d94d1ad4\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") "
Feb 23 13:21:58.593880 master-0 kubenswrapper[17411]: I0223 13:21:58.593811 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-config\") pod \"09988a22-4301-4f22-9dea-2b00d94d1ad4\" (UID: \"09988a22-4301-4f22-9dea-2b00d94d1ad4\") "
Feb 23 13:21:58.594535 master-0 kubenswrapper[17411]: I0223 13:21:58.594368 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-service-ca" (OuterVolumeSpecName: "service-ca") pod "09988a22-4301-4f22-9dea-2b00d94d1ad4" (UID: "09988a22-4301-4f22-9dea-2b00d94d1ad4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:21:58.594601 master-0 kubenswrapper[17411]: I0223 13:21:58.594541 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09988a22-4301-4f22-9dea-2b00d94d1ad4" (UID: "09988a22-4301-4f22-9dea-2b00d94d1ad4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:21:58.594673 master-0 kubenswrapper[17411]: I0223 13:21:58.594630 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-config" (OuterVolumeSpecName: "console-config") pod "09988a22-4301-4f22-9dea-2b00d94d1ad4" (UID: "09988a22-4301-4f22-9dea-2b00d94d1ad4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:21:58.594821 master-0 kubenswrapper[17411]: I0223 13:21:58.594733 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "09988a22-4301-4f22-9dea-2b00d94d1ad4" (UID: "09988a22-4301-4f22-9dea-2b00d94d1ad4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:21:58.597060 master-0 kubenswrapper[17411]: I0223 13:21:58.597002 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09988a22-4301-4f22-9dea-2b00d94d1ad4-kube-api-access-hh2rb" (OuterVolumeSpecName: "kube-api-access-hh2rb") pod "09988a22-4301-4f22-9dea-2b00d94d1ad4" (UID: "09988a22-4301-4f22-9dea-2b00d94d1ad4"). InnerVolumeSpecName "kube-api-access-hh2rb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:21:58.597265 master-0 kubenswrapper[17411]: I0223 13:21:58.597218 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "09988a22-4301-4f22-9dea-2b00d94d1ad4" (UID: "09988a22-4301-4f22-9dea-2b00d94d1ad4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:21:58.598754 master-0 kubenswrapper[17411]: I0223 13:21:58.598689 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "09988a22-4301-4f22-9dea-2b00d94d1ad4" (UID: "09988a22-4301-4f22-9dea-2b00d94d1ad4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:21:58.695765 master-0 kubenswrapper[17411]: I0223 13:21:58.695633 17411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:58.695765 master-0 kubenswrapper[17411]: I0223 13:21:58.695681 17411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:58.695765 master-0 kubenswrapper[17411]: I0223 13:21:58.695696 17411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:58.695765 master-0 kubenswrapper[17411]: I0223 13:21:58.695705 17411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09988a22-4301-4f22-9dea-2b00d94d1ad4-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:58.695765 master-0 kubenswrapper[17411]: I0223 13:21:58.695714 17411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:58.695765 master-0 kubenswrapper[17411]: I0223 13:21:58.695722 17411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09988a22-4301-4f22-9dea-2b00d94d1ad4-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:21:58.695765 master-0 kubenswrapper[17411]: I0223 13:21:58.695733 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh2rb\" (UniqueName:
\"kubernetes.io/projected/09988a22-4301-4f22-9dea-2b00d94d1ad4-kube-api-access-hh2rb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:21:58.883480 master-0 kubenswrapper[17411]: I0223 13:21:58.883311 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" path="/var/lib/kubelet/pods/cf1e79bb-bc6b-4cd8-9988-0adf5b658b80/volumes" Feb 23 13:21:59.272066 master-0 kubenswrapper[17411]: I0223 13:21:59.271985 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78988746df-4zq9k_09988a22-4301-4f22-9dea-2b00d94d1ad4/console/0.log" Feb 23 13:21:59.272066 master-0 kubenswrapper[17411]: I0223 13:21:59.272067 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78988746df-4zq9k" event={"ID":"09988a22-4301-4f22-9dea-2b00d94d1ad4","Type":"ContainerDied","Data":"d063debd4be7d35b15669971a393c233144c92324a2ad0c3e2d95bd920d5405a"} Feb 23 13:21:59.273153 master-0 kubenswrapper[17411]: I0223 13:21:59.272121 17411 scope.go:117] "RemoveContainer" containerID="e98eee0f3da5c26fe7126c873a58156f3bdb5d3ceff34b16d94afb222a5f0f97" Feb 23 13:21:59.273153 master-0 kubenswrapper[17411]: I0223 13:21:59.272167 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78988746df-4zq9k" Feb 23 13:21:59.309524 master-0 kubenswrapper[17411]: I0223 13:21:59.309392 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78988746df-4zq9k"] Feb 23 13:21:59.319637 master-0 kubenswrapper[17411]: I0223 13:21:59.319430 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-78988746df-4zq9k"] Feb 23 13:22:00.885892 master-0 kubenswrapper[17411]: I0223 13:22:00.885790 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09988a22-4301-4f22-9dea-2b00d94d1ad4" path="/var/lib/kubelet/pods/09988a22-4301-4f22-9dea-2b00d94d1ad4/volumes" Feb 23 13:22:08.376134 master-0 kubenswrapper[17411]: I0223 13:22:08.376019 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b9778d748-nlz5s" podUID="276116f1-ec73-4615-9607-8f29b379ea85" containerName="console" containerID="cri-o://3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0" gracePeriod=15 Feb 23 13:22:09.027355 master-0 kubenswrapper[17411]: I0223 13:22:09.027271 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b9778d748-nlz5s_276116f1-ec73-4615-9607-8f29b379ea85/console/0.log" Feb 23 13:22:09.027667 master-0 kubenswrapper[17411]: I0223 13:22:09.027378 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:22:09.103665 master-0 kubenswrapper[17411]: I0223 13:22:09.103577 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-oauth-config\") pod \"276116f1-ec73-4615-9607-8f29b379ea85\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " Feb 23 13:22:09.103665 master-0 kubenswrapper[17411]: I0223 13:22:09.103627 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-serving-cert\") pod \"276116f1-ec73-4615-9607-8f29b379ea85\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " Feb 23 13:22:09.104202 master-0 kubenswrapper[17411]: I0223 13:22:09.103698 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-service-ca\") pod \"276116f1-ec73-4615-9607-8f29b379ea85\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " Feb 23 13:22:09.104202 master-0 kubenswrapper[17411]: I0223 13:22:09.103761 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-trusted-ca-bundle\") pod \"276116f1-ec73-4615-9607-8f29b379ea85\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " Feb 23 13:22:09.104202 master-0 kubenswrapper[17411]: I0223 13:22:09.103790 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-console-config\") pod \"276116f1-ec73-4615-9607-8f29b379ea85\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " Feb 23 13:22:09.104202 master-0 kubenswrapper[17411]: I0223 
13:22:09.103873 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-oauth-serving-cert\") pod \"276116f1-ec73-4615-9607-8f29b379ea85\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " Feb 23 13:22:09.104202 master-0 kubenswrapper[17411]: I0223 13:22:09.103934 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjcc9\" (UniqueName: \"kubernetes.io/projected/276116f1-ec73-4615-9607-8f29b379ea85-kube-api-access-kjcc9\") pod \"276116f1-ec73-4615-9607-8f29b379ea85\" (UID: \"276116f1-ec73-4615-9607-8f29b379ea85\") " Feb 23 13:22:09.106123 master-0 kubenswrapper[17411]: I0223 13:22:09.105401 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "276116f1-ec73-4615-9607-8f29b379ea85" (UID: "276116f1-ec73-4615-9607-8f29b379ea85"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:09.106123 master-0 kubenswrapper[17411]: I0223 13:22:09.105558 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-console-config" (OuterVolumeSpecName: "console-config") pod "276116f1-ec73-4615-9607-8f29b379ea85" (UID: "276116f1-ec73-4615-9607-8f29b379ea85"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:09.106123 master-0 kubenswrapper[17411]: I0223 13:22:09.105763 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "276116f1-ec73-4615-9607-8f29b379ea85" (UID: "276116f1-ec73-4615-9607-8f29b379ea85"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:09.106123 master-0 kubenswrapper[17411]: I0223 13:22:09.105789 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-service-ca" (OuterVolumeSpecName: "service-ca") pod "276116f1-ec73-4615-9607-8f29b379ea85" (UID: "276116f1-ec73-4615-9607-8f29b379ea85"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:09.108687 master-0 kubenswrapper[17411]: I0223 13:22:09.108596 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "276116f1-ec73-4615-9607-8f29b379ea85" (UID: "276116f1-ec73-4615-9607-8f29b379ea85"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:22:09.109448 master-0 kubenswrapper[17411]: I0223 13:22:09.109370 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/276116f1-ec73-4615-9607-8f29b379ea85-kube-api-access-kjcc9" (OuterVolumeSpecName: "kube-api-access-kjcc9") pod "276116f1-ec73-4615-9607-8f29b379ea85" (UID: "276116f1-ec73-4615-9607-8f29b379ea85"). InnerVolumeSpecName "kube-api-access-kjcc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:09.109448 master-0 kubenswrapper[17411]: I0223 13:22:09.109419 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "276116f1-ec73-4615-9607-8f29b379ea85" (UID: "276116f1-ec73-4615-9607-8f29b379ea85"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:22:09.205903 master-0 kubenswrapper[17411]: I0223 13:22:09.205812 17411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:09.205903 master-0 kubenswrapper[17411]: I0223 13:22:09.205869 17411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-console-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:09.205903 master-0 kubenswrapper[17411]: I0223 13:22:09.205887 17411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:09.205903 master-0 kubenswrapper[17411]: I0223 13:22:09.205906 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjcc9\" (UniqueName: \"kubernetes.io/projected/276116f1-ec73-4615-9607-8f29b379ea85-kube-api-access-kjcc9\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:09.205903 master-0 kubenswrapper[17411]: I0223 13:22:09.205924 17411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:09.206508 master-0 kubenswrapper[17411]: I0223 13:22:09.205944 17411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/276116f1-ec73-4615-9607-8f29b379ea85-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:09.206508 master-0 kubenswrapper[17411]: I0223 13:22:09.205960 17411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/276116f1-ec73-4615-9607-8f29b379ea85-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:09.391825 master-0 kubenswrapper[17411]: I0223 13:22:09.391276 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b9778d748-nlz5s_276116f1-ec73-4615-9607-8f29b379ea85/console/0.log" Feb 23 13:22:09.391825 master-0 kubenswrapper[17411]: I0223 13:22:09.391374 17411 generic.go:334] "Generic (PLEG): container finished" podID="276116f1-ec73-4615-9607-8f29b379ea85" containerID="3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0" exitCode=2 Feb 23 13:22:09.391825 master-0 kubenswrapper[17411]: I0223 13:22:09.391426 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b9778d748-nlz5s" event={"ID":"276116f1-ec73-4615-9607-8f29b379ea85","Type":"ContainerDied","Data":"3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0"} Feb 23 13:22:09.391825 master-0 kubenswrapper[17411]: I0223 13:22:09.391464 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b9778d748-nlz5s" Feb 23 13:22:09.391825 master-0 kubenswrapper[17411]: I0223 13:22:09.391503 17411 scope.go:117] "RemoveContainer" containerID="3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0" Feb 23 13:22:09.391825 master-0 kubenswrapper[17411]: I0223 13:22:09.391484 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b9778d748-nlz5s" event={"ID":"276116f1-ec73-4615-9607-8f29b379ea85","Type":"ContainerDied","Data":"f3bb8e3ca9fe67de07db523f52ba8fb40c4b66df886daa376e0632e91c11d585"} Feb 23 13:22:09.417734 master-0 kubenswrapper[17411]: I0223 13:22:09.415788 17411 scope.go:117] "RemoveContainer" containerID="3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0" Feb 23 13:22:09.417734 master-0 kubenswrapper[17411]: E0223 13:22:09.416365 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0\": container with ID starting with 3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0 not found: ID does not exist" containerID="3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0" Feb 23 13:22:09.417734 master-0 kubenswrapper[17411]: I0223 13:22:09.416419 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0"} err="failed to get container status \"3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0\": rpc error: code = NotFound desc = could not find container \"3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0\": container with ID starting with 3aa5020e1eed5ef27b4efecdd62d24a0ebbdc2d69a7956abeb712e6852cf65e0 not found: ID does not exist" Feb 23 13:22:09.458970 master-0 kubenswrapper[17411]: I0223 13:22:09.458852 17411 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-console/console-5b9778d748-nlz5s"] Feb 23 13:22:09.468584 master-0 kubenswrapper[17411]: I0223 13:22:09.468453 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b9778d748-nlz5s"] Feb 23 13:22:10.887213 master-0 kubenswrapper[17411]: I0223 13:22:10.887134 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="276116f1-ec73-4615-9607-8f29b379ea85" path="/var/lib/kubelet/pods/276116f1-ec73-4615-9607-8f29b379ea85/volumes" Feb 23 13:22:22.407976 master-0 kubenswrapper[17411]: I0223 13:22:22.407870 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6bbdbf64dd-7jcx8" podUID="d3a25543-83b2-444a-955f-5c0cc8ee65ec" containerName="console" containerID="cri-o://f2b8eb4a6b96999453be22eb34e81205b38cdebc80739719b0d7581c55022473" gracePeriod=15 Feb 23 13:22:22.539431 master-0 kubenswrapper[17411]: I0223 13:22:22.539372 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bbdbf64dd-7jcx8_d3a25543-83b2-444a-955f-5c0cc8ee65ec/console/0.log" Feb 23 13:22:22.539523 master-0 kubenswrapper[17411]: I0223 13:22:22.539477 17411 generic.go:334] "Generic (PLEG): container finished" podID="d3a25543-83b2-444a-955f-5c0cc8ee65ec" containerID="f2b8eb4a6b96999453be22eb34e81205b38cdebc80739719b0d7581c55022473" exitCode=2 Feb 23 13:22:22.539576 master-0 kubenswrapper[17411]: I0223 13:22:22.539552 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bbdbf64dd-7jcx8" event={"ID":"d3a25543-83b2-444a-955f-5c0cc8ee65ec","Type":"ContainerDied","Data":"f2b8eb4a6b96999453be22eb34e81205b38cdebc80739719b0d7581c55022473"} Feb 23 13:22:22.892435 master-0 kubenswrapper[17411]: I0223 13:22:22.892381 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bbdbf64dd-7jcx8_d3a25543-83b2-444a-955f-5c0cc8ee65ec/console/0.log" Feb 23 13:22:22.892664 master-0 
kubenswrapper[17411]: I0223 13:22:22.892525 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:22:22.954882 master-0 kubenswrapper[17411]: I0223 13:22:22.954822 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-oauth-serving-cert\") pod \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " Feb 23 13:22:22.955136 master-0 kubenswrapper[17411]: I0223 13:22:22.954947 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-oauth-config\") pod \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " Feb 23 13:22:22.955136 master-0 kubenswrapper[17411]: I0223 13:22:22.954980 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-serving-cert\") pod \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " Feb 23 13:22:22.955136 master-0 kubenswrapper[17411]: I0223 13:22:22.955058 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-config\") pod \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " Feb 23 13:22:22.955136 master-0 kubenswrapper[17411]: I0223 13:22:22.955091 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-trusted-ca-bundle\") pod \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\" 
(UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " Feb 23 13:22:22.955136 master-0 kubenswrapper[17411]: I0223 13:22:22.955115 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-service-ca\") pod \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " Feb 23 13:22:22.955524 master-0 kubenswrapper[17411]: I0223 13:22:22.955156 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dgzd\" (UniqueName: \"kubernetes.io/projected/d3a25543-83b2-444a-955f-5c0cc8ee65ec-kube-api-access-4dgzd\") pod \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\" (UID: \"d3a25543-83b2-444a-955f-5c0cc8ee65ec\") " Feb 23 13:22:22.955591 master-0 kubenswrapper[17411]: I0223 13:22:22.955536 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d3a25543-83b2-444a-955f-5c0cc8ee65ec" (UID: "d3a25543-83b2-444a-955f-5c0cc8ee65ec"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:22.956114 master-0 kubenswrapper[17411]: I0223 13:22:22.956071 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-service-ca" (OuterVolumeSpecName: "service-ca") pod "d3a25543-83b2-444a-955f-5c0cc8ee65ec" (UID: "d3a25543-83b2-444a-955f-5c0cc8ee65ec"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:22.957609 master-0 kubenswrapper[17411]: I0223 13:22:22.957545 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-config" (OuterVolumeSpecName: "console-config") pod "d3a25543-83b2-444a-955f-5c0cc8ee65ec" (UID: "d3a25543-83b2-444a-955f-5c0cc8ee65ec"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:22.957751 master-0 kubenswrapper[17411]: I0223 13:22:22.957567 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d3a25543-83b2-444a-955f-5c0cc8ee65ec" (UID: "d3a25543-83b2-444a-955f-5c0cc8ee65ec"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:22.959096 master-0 kubenswrapper[17411]: I0223 13:22:22.959027 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a25543-83b2-444a-955f-5c0cc8ee65ec-kube-api-access-4dgzd" (OuterVolumeSpecName: "kube-api-access-4dgzd") pod "d3a25543-83b2-444a-955f-5c0cc8ee65ec" (UID: "d3a25543-83b2-444a-955f-5c0cc8ee65ec"). InnerVolumeSpecName "kube-api-access-4dgzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:22.959727 master-0 kubenswrapper[17411]: I0223 13:22:22.959680 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d3a25543-83b2-444a-955f-5c0cc8ee65ec" (UID: "d3a25543-83b2-444a-955f-5c0cc8ee65ec"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:22:22.962118 master-0 kubenswrapper[17411]: I0223 13:22:22.961517 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d3a25543-83b2-444a-955f-5c0cc8ee65ec" (UID: "d3a25543-83b2-444a-955f-5c0cc8ee65ec"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:22:23.057263 master-0 kubenswrapper[17411]: I0223 13:22:23.057157 17411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:23.057263 master-0 kubenswrapper[17411]: I0223 13:22:23.057219 17411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:23.057263 master-0 kubenswrapper[17411]: I0223 13:22:23.057260 17411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-console-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:23.057964 master-0 kubenswrapper[17411]: I0223 13:22:23.057281 17411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:23.057964 master-0 kubenswrapper[17411]: I0223 13:22:23.057331 17411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:23.057964 
master-0 kubenswrapper[17411]: I0223 13:22:23.057345 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dgzd\" (UniqueName: \"kubernetes.io/projected/d3a25543-83b2-444a-955f-5c0cc8ee65ec-kube-api-access-4dgzd\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:23.057964 master-0 kubenswrapper[17411]: I0223 13:22:23.057356 17411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a25543-83b2-444a-955f-5c0cc8ee65ec-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:22:23.551700 master-0 kubenswrapper[17411]: I0223 13:22:23.551611 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bbdbf64dd-7jcx8_d3a25543-83b2-444a-955f-5c0cc8ee65ec/console/0.log" Feb 23 13:22:23.551700 master-0 kubenswrapper[17411]: I0223 13:22:23.551708 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bbdbf64dd-7jcx8" event={"ID":"d3a25543-83b2-444a-955f-5c0cc8ee65ec","Type":"ContainerDied","Data":"7f944120f6edbf7e69ddb386f189836910453375c696b43ba2fce2312bfc2fe9"} Feb 23 13:22:23.552529 master-0 kubenswrapper[17411]: I0223 13:22:23.551772 17411 scope.go:117] "RemoveContainer" containerID="f2b8eb4a6b96999453be22eb34e81205b38cdebc80739719b0d7581c55022473" Feb 23 13:22:23.552529 master-0 kubenswrapper[17411]: I0223 13:22:23.551822 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bbdbf64dd-7jcx8" Feb 23 13:22:23.592036 master-0 kubenswrapper[17411]: I0223 13:22:23.591974 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bbdbf64dd-7jcx8"] Feb 23 13:22:23.600725 master-0 kubenswrapper[17411]: I0223 13:22:23.600669 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6bbdbf64dd-7jcx8"] Feb 23 13:22:24.884636 master-0 kubenswrapper[17411]: I0223 13:22:24.884543 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a25543-83b2-444a-955f-5c0cc8ee65ec" path="/var/lib/kubelet/pods/d3a25543-83b2-444a-955f-5c0cc8ee65ec/volumes" Feb 23 13:22:34.969555 master-0 kubenswrapper[17411]: I0223 13:22:34.969473 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6d4766ffb-ff98d"] Feb 23 13:22:42.550884 master-0 kubenswrapper[17411]: I0223 13:22:42.550821 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:22:42.596003 master-0 kubenswrapper[17411]: I0223 13:22:42.595937 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:22:42.751477 master-0 kubenswrapper[17411]: I0223 13:22:42.751359 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 13:22:47.594333 master-0 kubenswrapper[17411]: I0223 13:22:47.594207 17411 scope.go:117] "RemoveContainer" containerID="1b5f99f63dd002feaf41abedc78477cbb67500c7fee6071e3fdb7a32dbad49a8" Feb 23 13:22:47.624294 master-0 kubenswrapper[17411]: I0223 13:22:47.624163 17411 scope.go:117] "RemoveContainer" containerID="42cdeb8b7eb8c28b7cf71798320b73487eab2a374dc84ef2d6218c3ff6c02e03" Feb 23 13:23:00.012500 master-0 kubenswrapper[17411]: I0223 13:23:00.012379 17411 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" podUID="240a114d-1fb4-4787-a56d-820006dd7888" containerName="oauth-openshift" containerID="cri-o://482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50" gracePeriod=15 Feb 23 13:23:00.034660 master-0 kubenswrapper[17411]: I0223 13:23:00.034575 17411 patch_prober.go:28] interesting pod/oauth-openshift-6d4766ffb-ff98d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.91:6443/healthz\": dial tcp 10.128.0.91:6443: connect: connection refused" start-of-body= Feb 23 13:23:00.034828 master-0 kubenswrapper[17411]: I0223 13:23:00.034698 17411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" podUID="240a114d-1fb4-4787-a56d-820006dd7888" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.91:6443/healthz\": dial tcp 10.128.0.91:6443: connect: connection refused" Feb 23 13:23:00.487221 master-0 kubenswrapper[17411]: I0223 13:23:00.487095 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.523380 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/240a114d-1fb4-4787-a56d-820006dd7888-audit-dir\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.523436 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-cliconfig\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.523472 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-session\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.523525 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-service-ca\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.523576 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-audit-policies\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " 
Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.523613 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/240a114d-1fb4-4787-a56d-820006dd7888-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.523845 17411 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/240a114d-1fb4-4787-a56d-820006dd7888-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.524014 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.524378 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:23:00.525374 master-0 kubenswrapper[17411]: I0223 13:23:00.524606 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:23:00.526672 master-0 kubenswrapper[17411]: I0223 13:23:00.526526 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:23:00.532411 master-0 kubenswrapper[17411]: I0223 13:23:00.532371 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"] Feb 23 13:23:00.532691 master-0 kubenswrapper[17411]: E0223 13:23:00.532659 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276116f1-ec73-4615-9607-8f29b379ea85" containerName="console" Feb 23 13:23:00.532691 master-0 kubenswrapper[17411]: I0223 13:23:00.532684 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="276116f1-ec73-4615-9607-8f29b379ea85" containerName="console" Feb 23 13:23:00.532759 master-0 kubenswrapper[17411]: E0223 13:23:00.532715 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" containerName="console" Feb 23 13:23:00.532759 master-0 kubenswrapper[17411]: I0223 13:23:00.532723 17411 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" containerName="console" Feb 23 13:23:00.532759 master-0 kubenswrapper[17411]: E0223 13:23:00.532739 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09988a22-4301-4f22-9dea-2b00d94d1ad4" containerName="console" Feb 23 13:23:00.532759 master-0 kubenswrapper[17411]: I0223 13:23:00.532747 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="09988a22-4301-4f22-9dea-2b00d94d1ad4" containerName="console" Feb 23 13:23:00.532900 master-0 kubenswrapper[17411]: E0223 13:23:00.532768 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed5ee95-4638-4512-abb9-efad2f49dc19" containerName="kube-multus-additional-cni-plugins" Feb 23 13:23:00.532900 master-0 kubenswrapper[17411]: I0223 13:23:00.532778 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed5ee95-4638-4512-abb9-efad2f49dc19" containerName="kube-multus-additional-cni-plugins" Feb 23 13:23:00.532900 master-0 kubenswrapper[17411]: E0223 13:23:00.532792 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240a114d-1fb4-4787-a56d-820006dd7888" containerName="oauth-openshift" Feb 23 13:23:00.532900 master-0 kubenswrapper[17411]: I0223 13:23:00.532799 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="240a114d-1fb4-4787-a56d-820006dd7888" containerName="oauth-openshift" Feb 23 13:23:00.532900 master-0 kubenswrapper[17411]: E0223 13:23:00.532811 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a25543-83b2-444a-955f-5c0cc8ee65ec" containerName="console" Feb 23 13:23:00.532900 master-0 kubenswrapper[17411]: I0223 13:23:00.532819 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a25543-83b2-444a-955f-5c0cc8ee65ec" containerName="console" Feb 23 13:23:00.533093 master-0 kubenswrapper[17411]: I0223 13:23:00.532978 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1e79bb-bc6b-4cd8-9988-0adf5b658b80" containerName="console" Feb 23 
13:23:00.533093 master-0 kubenswrapper[17411]: I0223 13:23:00.532996 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="276116f1-ec73-4615-9607-8f29b379ea85" containerName="console" Feb 23 13:23:00.533093 master-0 kubenswrapper[17411]: I0223 13:23:00.533005 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="240a114d-1fb4-4787-a56d-820006dd7888" containerName="oauth-openshift" Feb 23 13:23:00.533093 master-0 kubenswrapper[17411]: I0223 13:23:00.533037 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a25543-83b2-444a-955f-5c0cc8ee65ec" containerName="console" Feb 23 13:23:00.533093 master-0 kubenswrapper[17411]: I0223 13:23:00.533054 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="09988a22-4301-4f22-9dea-2b00d94d1ad4" containerName="console" Feb 23 13:23:00.533093 master-0 kubenswrapper[17411]: I0223 13:23:00.533067 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed5ee95-4638-4512-abb9-efad2f49dc19" containerName="kube-multus-additional-cni-plugins" Feb 23 13:23:00.533648 master-0 kubenswrapper[17411]: I0223 13:23:00.533568 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.544308 master-0 kubenswrapper[17411]: I0223 13:23:00.544166 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"] Feb 23 13:23:00.625136 master-0 kubenswrapper[17411]: I0223 13:23:00.625071 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-provider-selection\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.625541 master-0 kubenswrapper[17411]: I0223 13:23:00.625158 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-trusted-ca-bundle\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.625541 master-0 kubenswrapper[17411]: I0223 13:23:00.625418 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-ocp-branding-template\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.625541 master-0 kubenswrapper[17411]: I0223 13:23:00.625483 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-error\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.625541 master-0 kubenswrapper[17411]: I0223 
13:23:00.625511 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-router-certs\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.625740 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-login\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.625808 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82zkl\" (UniqueName: \"kubernetes.io/projected/240a114d-1fb4-4787-a56d-820006dd7888-kube-api-access-82zkl\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.625853 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-serving-cert\") pod \"240a114d-1fb4-4787-a56d-820006dd7888\" (UID: \"240a114d-1fb4-4787-a56d-820006dd7888\") " Feb 23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.626010 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 
23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.626047 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-error\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.626027 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.626091 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.626151 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-audit-policies\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626280 master-0 kubenswrapper[17411]: I0223 13:23:00.626221 17411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2279db25-bb89-4b25-a863-9a887a0a31a5-audit-dir\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626998 master-0 kubenswrapper[17411]: I0223 13:23:00.626296 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-login\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626998 master-0 kubenswrapper[17411]: I0223 13:23:00.626330 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626998 master-0 kubenswrapper[17411]: I0223 13:23:00.626368 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxhdg\" (UniqueName: \"kubernetes.io/projected/2279db25-bb89-4b25-a863-9a887a0a31a5-kube-api-access-sxhdg\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626998 master-0 kubenswrapper[17411]: I0223 13:23:00.626419 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626998 master-0 kubenswrapper[17411]: I0223 13:23:00.626565 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626998 master-0 kubenswrapper[17411]: I0223 13:23:00.626695 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626998 master-0 kubenswrapper[17411]: I0223 13:23:00.626762 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.626998 master-0 kubenswrapper[17411]: I0223 13:23:00.626938 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-session\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.627398 master-0 kubenswrapper[17411]: I0223 13:23:00.627051 17411 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.627398 master-0 kubenswrapper[17411]: I0223 13:23:00.627069 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.627398 master-0 kubenswrapper[17411]: I0223 13:23:00.627082 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.627398 master-0 kubenswrapper[17411]: I0223 13:23:00.627092 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.627398 master-0 kubenswrapper[17411]: I0223 13:23:00.627101 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.628945 master-0 kubenswrapper[17411]: I0223 13:23:00.628906 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:23:00.629005 master-0 kubenswrapper[17411]: I0223 13:23:00.628988 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:23:00.629440 master-0 kubenswrapper[17411]: I0223 13:23:00.629366 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240a114d-1fb4-4787-a56d-820006dd7888-kube-api-access-82zkl" (OuterVolumeSpecName: "kube-api-access-82zkl") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "kube-api-access-82zkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:23:00.629440 master-0 kubenswrapper[17411]: I0223 13:23:00.629410 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:23:00.629893 master-0 kubenswrapper[17411]: I0223 13:23:00.629859 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:23:00.630840 master-0 kubenswrapper[17411]: I0223 13:23:00.630789 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:23:00.631472 master-0 kubenswrapper[17411]: I0223 13:23:00.631376 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "240a114d-1fb4-4787-a56d-820006dd7888" (UID: "240a114d-1fb4-4787-a56d-820006dd7888"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:23:00.728006 master-0 kubenswrapper[17411]: I0223 13:23:00.727926 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2279db25-bb89-4b25-a863-9a887a0a31a5-audit-dir\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.728400 master-0 kubenswrapper[17411]: I0223 13:23:00.728107 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2279db25-bb89-4b25-a863-9a887a0a31a5-audit-dir\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.728400 master-0 kubenswrapper[17411]: I0223 13:23:00.728159 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-login\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.728400 master-0 kubenswrapper[17411]: I0223 13:23:00.728327 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.728400 master-0 kubenswrapper[17411]: I0223 13:23:00.728387 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sxhdg\" (UniqueName: \"kubernetes.io/projected/2279db25-bb89-4b25-a863-9a887a0a31a5-kube-api-access-sxhdg\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.728616 master-0 kubenswrapper[17411]: I0223 13:23:00.728458 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.728616 master-0 kubenswrapper[17411]: I0223 13:23:00.728505 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.728616 master-0 kubenswrapper[17411]: I0223 13:23:00.728553 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.728616 master-0 kubenswrapper[17411]: I0223 13:23:00.728614 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.729073 master-0 kubenswrapper[17411]: I0223 13:23:00.728976 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-session\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.729379 master-0 kubenswrapper[17411]: I0223 13:23:00.729327 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.729564 master-0 kubenswrapper[17411]: I0223 13:23:00.729398 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-error\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.729623 master-0 kubenswrapper[17411]: I0223 13:23:00.729470 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-audit-policies\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.729681 master-0 kubenswrapper[17411]: I0223 
13:23:00.729619 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" Feb 23 13:23:00.729834 master-0 kubenswrapper[17411]: I0223 13:23:00.729778 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.729834 master-0 kubenswrapper[17411]: I0223 13:23:00.729811 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.729834 master-0 kubenswrapper[17411]: I0223 13:23:00.729829 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.729976 master-0 kubenswrapper[17411]: I0223 13:23:00.729851 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 23 13:23:00.729976 master-0 kubenswrapper[17411]: I0223 13:23:00.729868 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82zkl\" (UniqueName: \"kubernetes.io/projected/240a114d-1fb4-4787-a56d-820006dd7888-kube-api-access-82zkl\") on node \"master-0\" DevicePath \"\"" Feb 23 
13:23:00.729976 master-0 kubenswrapper[17411]: I0223 13:23:00.729884 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 23 13:23:00.729976 master-0 kubenswrapper[17411]: I0223 13:23:00.729901 17411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/240a114d-1fb4-4787-a56d-820006dd7888-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Feb 23 13:23:00.730142 master-0 kubenswrapper[17411]: I0223 13:23:00.729988 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.730302 master-0 kubenswrapper[17411]: I0223 13:23:00.730238 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-audit-policies\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.730906 master-0 kubenswrapper[17411]: I0223 13:23:00.730859 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.731997 master-0 kubenswrapper[17411]: I0223 13:23:00.731496 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.733620 master-0 kubenswrapper[17411]: I0223 13:23:00.733531 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.734167 master-0 kubenswrapper[17411]: I0223 13:23:00.734121 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.735785 master-0 kubenswrapper[17411]: I0223 13:23:00.734659 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.735785 master-0 kubenswrapper[17411]: I0223 13:23:00.734870 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.735785 master-0 kubenswrapper[17411]: I0223 13:23:00.734910 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-system-session\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.735785 master-0 kubenswrapper[17411]: I0223 13:23:00.735221 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-login\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.736695 master-0 kubenswrapper[17411]: I0223 13:23:00.736662 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2279db25-bb89-4b25-a863-9a887a0a31a5-v4-0-config-user-template-error\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.754176 master-0 kubenswrapper[17411]: I0223 13:23:00.754114 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxhdg\" (UniqueName: \"kubernetes.io/projected/2279db25-bb89-4b25-a863-9a887a0a31a5-kube-api-access-sxhdg\") pod \"oauth-openshift-5dfcdd9d9c-vdfd2\" (UID: \"2279db25-bb89-4b25-a863-9a887a0a31a5\") " pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.882213 master-0 kubenswrapper[17411]: I0223 13:23:00.882028 17411 generic.go:334] "Generic (PLEG): container finished" podID="240a114d-1fb4-4787-a56d-820006dd7888" containerID="482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50" exitCode=0
Feb 23 13:23:00.882213 master-0 kubenswrapper[17411]: I0223 13:23:00.882083 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d"
Feb 23 13:23:00.882213 master-0 kubenswrapper[17411]: I0223 13:23:00.882102 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" event={"ID":"240a114d-1fb4-4787-a56d-820006dd7888","Type":"ContainerDied","Data":"482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50"}
Feb 23 13:23:00.882795 master-0 kubenswrapper[17411]: I0223 13:23:00.882225 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4766ffb-ff98d" event={"ID":"240a114d-1fb4-4787-a56d-820006dd7888","Type":"ContainerDied","Data":"de5d07253d09cb464857bea6c2cd82cbeba1dcd3d21233f2bf6179403ca8acf2"}
Feb 23 13:23:00.882795 master-0 kubenswrapper[17411]: I0223 13:23:00.882301 17411 scope.go:117] "RemoveContainer" containerID="482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50"
Feb 23 13:23:00.883151 master-0 kubenswrapper[17411]: I0223 13:23:00.882794 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:00.931129 master-0 kubenswrapper[17411]: I0223 13:23:00.931078 17411 scope.go:117] "RemoveContainer" containerID="482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50"
Feb 23 13:23:00.932946 master-0 kubenswrapper[17411]: E0223 13:23:00.932894 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50\": container with ID starting with 482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50 not found: ID does not exist" containerID="482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50"
Feb 23 13:23:00.933055 master-0 kubenswrapper[17411]: I0223 13:23:00.932951 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50"} err="failed to get container status \"482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50\": rpc error: code = NotFound desc = could not find container \"482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50\": container with ID starting with 482a97cdecff2322d29de44a5e60cafe8588c0d5428772d82bae5e3a03a55a50 not found: ID does not exist"
Feb 23 13:23:00.941393 master-0 kubenswrapper[17411]: I0223 13:23:00.941324 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6d4766ffb-ff98d"]
Feb 23 13:23:00.948061 master-0 kubenswrapper[17411]: I0223 13:23:00.947986 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6d4766ffb-ff98d"]
Feb 23 13:23:01.303393 master-0 kubenswrapper[17411]: I0223 13:23:01.303338 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"]
Feb 23 13:23:01.890874 master-0 kubenswrapper[17411]: I0223 13:23:01.890717 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" event={"ID":"2279db25-bb89-4b25-a863-9a887a0a31a5","Type":"ContainerStarted","Data":"0bdd4817c2c93efb54c54c74b9b231ebbbb73db523f6b4844636aac53e504f7a"}
Feb 23 13:23:01.890874 master-0 kubenswrapper[17411]: I0223 13:23:01.890808 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" event={"ID":"2279db25-bb89-4b25-a863-9a887a0a31a5","Type":"ContainerStarted","Data":"1fd1ffd7e06f611d2c2c00b2d76ef30227abb198ff3b0577c853c90d9a677c7e"}
Feb 23 13:23:01.919579 master-0 kubenswrapper[17411]: I0223 13:23:01.919473 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2" podStartSLOduration=17.919451396 podStartE2EDuration="17.919451396s" podCreationTimestamp="2026-02-23 13:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:23:01.916557674 +0000 UTC m=+975.344064281" watchObservedRunningTime="2026-02-23 13:23:01.919451396 +0000 UTC m=+975.346958003"
Feb 23 13:23:02.879769 master-0 kubenswrapper[17411]: I0223 13:23:02.879668 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240a114d-1fb4-4787-a56d-820006dd7888" path="/var/lib/kubelet/pods/240a114d-1fb4-4787-a56d-820006dd7888/volumes"
Feb 23 13:23:02.901845 master-0 kubenswrapper[17411]: I0223 13:23:02.901779 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:23:02.908419 master-0 kubenswrapper[17411]: I0223 13:23:02.908353 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5dfcdd9d9c-vdfd2"
Feb 23 13:24:27.662976 master-0 kubenswrapper[17411]: I0223 13:24:27.662916 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"]
Feb 23 13:24:27.665110 master-0 kubenswrapper[17411]: I0223 13:24:27.665074 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.679393 master-0 kubenswrapper[17411]: I0223 13:24:27.679315 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"]
Feb 23 13:24:27.699640 master-0 kubenswrapper[17411]: I0223 13:24:27.699558 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.699909 master-0 kubenswrapper[17411]: I0223 13:24:27.699728 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.699909 master-0 kubenswrapper[17411]: I0223 13:24:27.699855 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlfrp\" (UniqueName: \"kubernetes.io/projected/a3debc89-4a28-4608-afd5-cce4cd6856bb-kube-api-access-mlfrp\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.801052 master-0 kubenswrapper[17411]: I0223 13:24:27.800972 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.801052 master-0 kubenswrapper[17411]: I0223 13:24:27.801050 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.801352 master-0 kubenswrapper[17411]: I0223 13:24:27.801080 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlfrp\" (UniqueName: \"kubernetes.io/projected/a3debc89-4a28-4608-afd5-cce4cd6856bb-kube-api-access-mlfrp\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.801577 master-0 kubenswrapper[17411]: I0223 13:24:27.801543 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.801668 master-0 kubenswrapper[17411]: I0223 13:24:27.801614 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.817162 master-0 kubenswrapper[17411]: I0223 13:24:27.817119 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlfrp\" (UniqueName: \"kubernetes.io/projected/a3debc89-4a28-4608-afd5-cce4cd6856bb-kube-api-access-mlfrp\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:27.987639 master-0 kubenswrapper[17411]: I0223 13:24:27.987540 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:28.480916 master-0 kubenswrapper[17411]: I0223 13:24:28.480839 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"]
Feb 23 13:24:28.739341 master-0 kubenswrapper[17411]: I0223 13:24:28.739149 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5" event={"ID":"a3debc89-4a28-4608-afd5-cce4cd6856bb","Type":"ContainerStarted","Data":"c00779d532e97987effb0bd510235e6daf5f6b2f625201aac9a5d625ccc30a03"}
Feb 23 13:24:28.739341 master-0 kubenswrapper[17411]: I0223 13:24:28.739213 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5" event={"ID":"a3debc89-4a28-4608-afd5-cce4cd6856bb","Type":"ContainerStarted","Data":"c85a3511dde084bdb5db89fa73b95f49818249bc4c310a4bfb680911871fe6ab"}
Feb 23 13:24:29.749749 master-0 kubenswrapper[17411]: I0223 13:24:29.749608 17411 generic.go:334] "Generic (PLEG): container finished" podID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerID="c00779d532e97987effb0bd510235e6daf5f6b2f625201aac9a5d625ccc30a03" exitCode=0
Feb 23 13:24:29.749749 master-0 kubenswrapper[17411]: I0223 13:24:29.749735 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5" event={"ID":"a3debc89-4a28-4608-afd5-cce4cd6856bb","Type":"ContainerDied","Data":"c00779d532e97987effb0bd510235e6daf5f6b2f625201aac9a5d625ccc30a03"}
Feb 23 13:24:31.766333 master-0 kubenswrapper[17411]: I0223 13:24:31.766164 17411 generic.go:334] "Generic (PLEG): container finished" podID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerID="55c6d745f0e934d1ed8e8e75a4dd392b1ff48099e23f61900e3727a86a121d97" exitCode=0
Feb 23 13:24:31.766333 master-0 kubenswrapper[17411]: I0223 13:24:31.766224 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5" event={"ID":"a3debc89-4a28-4608-afd5-cce4cd6856bb","Type":"ContainerDied","Data":"55c6d745f0e934d1ed8e8e75a4dd392b1ff48099e23f61900e3727a86a121d97"}
Feb 23 13:24:32.790034 master-0 kubenswrapper[17411]: I0223 13:24:32.789931 17411 generic.go:334] "Generic (PLEG): container finished" podID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerID="59a90d3ffa0427ec85c1d0cde6c849850c1beabfcbc03b2cfb9bd59a10bba96f" exitCode=0
Feb 23 13:24:32.790670 master-0 kubenswrapper[17411]: I0223 13:24:32.790038 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5" event={"ID":"a3debc89-4a28-4608-afd5-cce4cd6856bb","Type":"ContainerDied","Data":"59a90d3ffa0427ec85c1d0cde6c849850c1beabfcbc03b2cfb9bd59a10bba96f"}
Feb 23 13:24:34.165406 master-0 kubenswrapper[17411]: I0223 13:24:34.165354 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:34.311481 master-0 kubenswrapper[17411]: I0223 13:24:34.311413 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-util\") pod \"a3debc89-4a28-4608-afd5-cce4cd6856bb\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") "
Feb 23 13:24:34.311723 master-0 kubenswrapper[17411]: I0223 13:24:34.311590 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlfrp\" (UniqueName: \"kubernetes.io/projected/a3debc89-4a28-4608-afd5-cce4cd6856bb-kube-api-access-mlfrp\") pod \"a3debc89-4a28-4608-afd5-cce4cd6856bb\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") "
Feb 23 13:24:34.311723 master-0 kubenswrapper[17411]: I0223 13:24:34.311648 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-bundle\") pod \"a3debc89-4a28-4608-afd5-cce4cd6856bb\" (UID: \"a3debc89-4a28-4608-afd5-cce4cd6856bb\") "
Feb 23 13:24:34.312699 master-0 kubenswrapper[17411]: I0223 13:24:34.312651 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-bundle" (OuterVolumeSpecName: "bundle") pod "a3debc89-4a28-4608-afd5-cce4cd6856bb" (UID: "a3debc89-4a28-4608-afd5-cce4cd6856bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:24:34.317876 master-0 kubenswrapper[17411]: I0223 13:24:34.317793 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3debc89-4a28-4608-afd5-cce4cd6856bb-kube-api-access-mlfrp" (OuterVolumeSpecName: "kube-api-access-mlfrp") pod "a3debc89-4a28-4608-afd5-cce4cd6856bb" (UID: "a3debc89-4a28-4608-afd5-cce4cd6856bb"). InnerVolumeSpecName "kube-api-access-mlfrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:24:34.327688 master-0 kubenswrapper[17411]: I0223 13:24:34.327574 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-util" (OuterVolumeSpecName: "util") pod "a3debc89-4a28-4608-afd5-cce4cd6856bb" (UID: "a3debc89-4a28-4608-afd5-cce4cd6856bb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:24:34.413343 master-0 kubenswrapper[17411]: I0223 13:24:34.413223 17411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-util\") on node \"master-0\" DevicePath \"\""
Feb 23 13:24:34.413343 master-0 kubenswrapper[17411]: I0223 13:24:34.413294 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlfrp\" (UniqueName: \"kubernetes.io/projected/a3debc89-4a28-4608-afd5-cce4cd6856bb-kube-api-access-mlfrp\") on node \"master-0\" DevicePath \"\""
Feb 23 13:24:34.413343 master-0 kubenswrapper[17411]: I0223 13:24:34.413315 17411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3debc89-4a28-4608-afd5-cce4cd6856bb-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:24:34.819721 master-0 kubenswrapper[17411]: I0223 13:24:34.819621 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5" event={"ID":"a3debc89-4a28-4608-afd5-cce4cd6856bb","Type":"ContainerDied","Data":"c85a3511dde084bdb5db89fa73b95f49818249bc4c310a4bfb680911871fe6ab"}
Feb 23 13:24:34.819721 master-0 kubenswrapper[17411]: I0223 13:24:34.819669 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zg6d5"
Feb 23 13:24:34.819721 master-0 kubenswrapper[17411]: I0223 13:24:34.819679 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c85a3511dde084bdb5db89fa73b95f49818249bc4c310a4bfb680911871fe6ab"
Feb 23 13:24:40.204021 master-0 kubenswrapper[17411]: I0223 13:24:40.203929 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-8b54dc669-74mwt"]
Feb 23 13:24:40.205303 master-0 kubenswrapper[17411]: E0223 13:24:40.204364 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerName="pull"
Feb 23 13:24:40.205303 master-0 kubenswrapper[17411]: I0223 13:24:40.204382 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerName="pull"
Feb 23 13:24:40.205303 master-0 kubenswrapper[17411]: E0223 13:24:40.204396 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerName="extract"
Feb 23 13:24:40.205303 master-0 kubenswrapper[17411]: I0223 13:24:40.204404 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerName="extract"
Feb 23 13:24:40.205303 master-0 kubenswrapper[17411]: E0223 13:24:40.204446 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerName="util"
Feb 23 13:24:40.205303 master-0 kubenswrapper[17411]: I0223 13:24:40.204457 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerName="util"
Feb 23 13:24:40.205303 master-0 kubenswrapper[17411]: I0223 13:24:40.204657 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3debc89-4a28-4608-afd5-cce4cd6856bb" containerName="extract"
Feb 23 13:24:40.205303 master-0 kubenswrapper[17411]: I0223 13:24:40.205297 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.207398 master-0 kubenswrapper[17411]: I0223 13:24:40.207328 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Feb 23 13:24:40.207579 master-0 kubenswrapper[17411]: I0223 13:24:40.207419 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Feb 23 13:24:40.207579 master-0 kubenswrapper[17411]: I0223 13:24:40.207543 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Feb 23 13:24:40.208447 master-0 kubenswrapper[17411]: I0223 13:24:40.208406 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Feb 23 13:24:40.220965 master-0 kubenswrapper[17411]: I0223 13:24:40.220890 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-8b54dc669-74mwt"]
Feb 23 13:24:40.221721 master-0 kubenswrapper[17411]: I0223 13:24:40.221663 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Feb 23 13:24:40.318989 master-0 kubenswrapper[17411]: I0223 13:24:40.318902 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-apiservice-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.319382 master-0 kubenswrapper[17411]: I0223 13:24:40.319016 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-webhook-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.319382 master-0 kubenswrapper[17411]: I0223 13:24:40.319066 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-socket-dir\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.319382 master-0 kubenswrapper[17411]: I0223 13:24:40.319137 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-metrics-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.319382 master-0 kubenswrapper[17411]: I0223 13:24:40.319167 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8lrs\" (UniqueName: \"kubernetes.io/projected/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-kube-api-access-t8lrs\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.420984 master-0 kubenswrapper[17411]: I0223 13:24:40.420908 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-apiservice-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.421228 master-0 kubenswrapper[17411]: I0223 13:24:40.421061 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-webhook-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.421228 master-0 kubenswrapper[17411]: I0223 13:24:40.421133 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-socket-dir\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.421320 master-0 kubenswrapper[17411]: I0223 13:24:40.421267 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-metrics-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.421526 master-0 kubenswrapper[17411]: I0223 13:24:40.421475 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8lrs\" (UniqueName: \"kubernetes.io/projected/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-kube-api-access-t8lrs\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.422125 master-0 kubenswrapper[17411]: I0223 13:24:40.422070 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-socket-dir\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.424841 master-0 kubenswrapper[17411]: I0223 13:24:40.424798 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-apiservice-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.425465 master-0 kubenswrapper[17411]: I0223 13:24:40.425425 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-webhook-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.426565 master-0 kubenswrapper[17411]: I0223 13:24:40.426522 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-metrics-cert\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.436696 master-0 kubenswrapper[17411]: I0223 13:24:40.436659 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8lrs\" (UniqueName: \"kubernetes.io/projected/61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4-kube-api-access-t8lrs\") pod \"lvms-operator-8b54dc669-74mwt\" (UID: \"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4\") " pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:40.523672 master-0 kubenswrapper[17411]: I0223 13:24:40.523514 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:41.021616 master-0 kubenswrapper[17411]: I0223 13:24:41.019660 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-8b54dc669-74mwt"]
Feb 23 13:24:41.874848 master-0 kubenswrapper[17411]: I0223 13:24:41.874800 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-8b54dc669-74mwt" event={"ID":"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4","Type":"ContainerStarted","Data":"4f06d5335ce67b4168f7e7465465bfe0cbb51bcc78aeb17a30f111cf28405286"}
Feb 23 13:24:46.915355 master-0 kubenswrapper[17411]: I0223 13:24:46.915188 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-8b54dc669-74mwt" event={"ID":"61cbb6b4-ac7c-4b56-bc17-5b4b7b9a62a4","Type":"ContainerStarted","Data":"1aa68263534ede2e4d85e1c883d90d4a3276e938f060da8095c519f7193f10d4"}
Feb 23 13:24:46.915992 master-0 kubenswrapper[17411]: I0223 13:24:46.915445 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:46.941931 master-0 kubenswrapper[17411]: I0223 13:24:46.941841 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-8b54dc669-74mwt" podStartSLOduration=1.473375556 podStartE2EDuration="6.941817419s" podCreationTimestamp="2026-02-23 13:24:40 +0000 UTC" firstStartedPulling="2026-02-23 13:24:41.026914044 +0000 UTC m=+1074.454420651" lastFinishedPulling="2026-02-23 13:24:46.495355907 +0000 UTC m=+1079.922862514" observedRunningTime="2026-02-23 13:24:46.937057784 +0000 UTC m=+1080.364564401" watchObservedRunningTime="2026-02-23 13:24:46.941817419 +0000 UTC m=+1080.369324026"
Feb 23 13:24:47.934054 master-0 kubenswrapper[17411]: I0223 13:24:47.933991 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-8b54dc669-74mwt"
Feb 23 13:24:52.147652 master-0 kubenswrapper[17411]: I0223 13:24:52.147567 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"]
Feb 23 13:24:52.159551 master-0 kubenswrapper[17411]: I0223 13:24:52.158903 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:24:52.185850 master-0 kubenswrapper[17411]: I0223 13:24:52.181275 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"]
Feb 23 13:24:52.250361 master-0 kubenswrapper[17411]: I0223 13:24:52.250306 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:24:52.250594 master-0 kubenswrapper[17411]: I0223 13:24:52.250462 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6npq\" (UniqueName: \"kubernetes.io/projected/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-kube-api-access-k6npq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:24:52.250916 master-0 kubenswrapper[17411]: I0223 13:24:52.250850 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:24:52.352086 master-0 kubenswrapper[17411]: I0223 13:24:52.351994 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:24:52.352086 master-0 kubenswrapper[17411]: I0223 13:24:52.352068 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6npq\" (UniqueName: \"kubernetes.io/projected/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-kube-api-access-k6npq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:24:52.352442 master-0 kubenswrapper[17411]: I0223 13:24:52.352150 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:24:52.352753 master-0 kubenswrapper[17411]: I0223 13:24:52.352707 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-bundle\") pod
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" Feb 23 13:24:52.352753 master-0 kubenswrapper[17411]: I0223 13:24:52.352745 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" Feb 23 13:24:52.373674 master-0 kubenswrapper[17411]: I0223 13:24:52.373615 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6npq\" (UniqueName: \"kubernetes.io/projected/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-kube-api-access-k6npq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" Feb 23 13:24:52.489058 master-0 kubenswrapper[17411]: I0223 13:24:52.488984 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" Feb 23 13:24:52.835989 master-0 kubenswrapper[17411]: I0223 13:24:52.835809 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42"] Feb 23 13:24:52.837563 master-0 kubenswrapper[17411]: I0223 13:24:52.837523 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:52.851048 master-0 kubenswrapper[17411]: I0223 13:24:52.850982 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42"] Feb 23 13:24:52.936161 master-0 kubenswrapper[17411]: I0223 13:24:52.936075 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"] Feb 23 13:24:52.940953 master-0 kubenswrapper[17411]: W0223 13:24:52.940912 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a6d246_c5eb_4b70_bf56_1eb308dcc7bc.slice/crio-18d737d140288b6222e529957a703c9a7839cffba003f7a2a692d1b47e5ea645 WatchSource:0}: Error finding container 18d737d140288b6222e529957a703c9a7839cffba003f7a2a692d1b47e5ea645: Status 404 returned error can't find the container with id 18d737d140288b6222e529957a703c9a7839cffba003f7a2a692d1b47e5ea645 Feb 23 13:24:52.962898 master-0 kubenswrapper[17411]: I0223 13:24:52.962825 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6l2\" (UniqueName: \"kubernetes.io/projected/c79d2cf8-8f79-486e-9097-4fefccc77cf4-kube-api-access-tr6l2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:52.963054 master-0 kubenswrapper[17411]: I0223 13:24:52.962922 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: 
\"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:52.963412 master-0 kubenswrapper[17411]: I0223 13:24:52.963351 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:52.972622 master-0 kubenswrapper[17411]: I0223 13:24:52.972550 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" event={"ID":"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc","Type":"ContainerStarted","Data":"18d737d140288b6222e529957a703c9a7839cffba003f7a2a692d1b47e5ea645"} Feb 23 13:24:53.065051 master-0 kubenswrapper[17411]: I0223 13:24:53.064996 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:53.065196 master-0 kubenswrapper[17411]: I0223 13:24:53.065060 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:53.065280 master-0 kubenswrapper[17411]: I0223 
13:24:53.065227 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr6l2\" (UniqueName: \"kubernetes.io/projected/c79d2cf8-8f79-486e-9097-4fefccc77cf4-kube-api-access-tr6l2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:53.065844 master-0 kubenswrapper[17411]: I0223 13:24:53.065794 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:53.066073 master-0 kubenswrapper[17411]: I0223 13:24:53.066048 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:53.081229 master-0 kubenswrapper[17411]: I0223 13:24:53.081171 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr6l2\" (UniqueName: \"kubernetes.io/projected/c79d2cf8-8f79-486e-9097-4fefccc77cf4-kube-api-access-tr6l2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:53.161223 master-0 kubenswrapper[17411]: I0223 13:24:53.161146 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:53.422271 master-0 kubenswrapper[17411]: I0223 13:24:53.422102 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42"] Feb 23 13:24:53.423660 master-0 kubenswrapper[17411]: I0223 13:24:53.423631 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.435309 master-0 kubenswrapper[17411]: I0223 13:24:53.435200 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42"] Feb 23 13:24:53.578779 master-0 kubenswrapper[17411]: I0223 13:24:53.578717 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.578924 master-0 kubenswrapper[17411]: I0223 13:24:53.578888 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.579066 master-0 kubenswrapper[17411]: I0223 13:24:53.579030 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-256xv\" (UniqueName: 
\"kubernetes.io/projected/65d57483-a537-4ebc-bf88-960ed94423df-kube-api-access-256xv\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.585510 master-0 kubenswrapper[17411]: I0223 13:24:53.585467 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42"] Feb 23 13:24:53.680609 master-0 kubenswrapper[17411]: I0223 13:24:53.680476 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.680609 master-0 kubenswrapper[17411]: I0223 13:24:53.680552 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-256xv\" (UniqueName: \"kubernetes.io/projected/65d57483-a537-4ebc-bf88-960ed94423df-kube-api-access-256xv\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.680853 master-0 kubenswrapper[17411]: I0223 13:24:53.680624 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.681463 master-0 
kubenswrapper[17411]: I0223 13:24:53.681161 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.681915 master-0 kubenswrapper[17411]: I0223 13:24:53.681886 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.701732 master-0 kubenswrapper[17411]: I0223 13:24:53.701674 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-256xv\" (UniqueName: \"kubernetes.io/projected/65d57483-a537-4ebc-bf88-960ed94423df-kube-api-access-256xv\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.808678 master-0 kubenswrapper[17411]: I0223 13:24:53.808608 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" Feb 23 13:24:53.990213 master-0 kubenswrapper[17411]: I0223 13:24:53.990138 17411 generic.go:334] "Generic (PLEG): container finished" podID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerID="24cf569a8ac8f1be04b6bc33a2811437c70f27b9f2cc436728153615d31c0131" exitCode=0 Feb 23 13:24:53.990213 master-0 kubenswrapper[17411]: I0223 13:24:53.990195 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" event={"ID":"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc","Type":"ContainerDied","Data":"24cf569a8ac8f1be04b6bc33a2811437c70f27b9f2cc436728153615d31c0131"} Feb 23 13:24:53.993990 master-0 kubenswrapper[17411]: I0223 13:24:53.993661 17411 generic.go:334] "Generic (PLEG): container finished" podID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerID="c3e1b7581bed3773b5245ba4a9915ca992a1445afb31b107a928d74b6c99998c" exitCode=0 Feb 23 13:24:53.993990 master-0 kubenswrapper[17411]: I0223 13:24:53.993697 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" event={"ID":"c79d2cf8-8f79-486e-9097-4fefccc77cf4","Type":"ContainerDied","Data":"c3e1b7581bed3773b5245ba4a9915ca992a1445afb31b107a928d74b6c99998c"} Feb 23 13:24:53.993990 master-0 kubenswrapper[17411]: I0223 13:24:53.993717 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" event={"ID":"c79d2cf8-8f79-486e-9097-4fefccc77cf4","Type":"ContainerStarted","Data":"5eb78f3da61c59af757c6208be494a1ff21ada33cd27869dc8846735da7ccd00"} Feb 23 13:24:54.238480 master-0 kubenswrapper[17411]: I0223 13:24:54.238436 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42"] Feb 23 13:24:54.240098 master-0 kubenswrapper[17411]: W0223 13:24:54.240059 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65d57483_a537_4ebc_bf88_960ed94423df.slice/crio-db4637b534b90ad7fca87705f069508778c5e34d434e0102d5f4365c70dad9a9 WatchSource:0}: Error finding container db4637b534b90ad7fca87705f069508778c5e34d434e0102d5f4365c70dad9a9: Status 404 returned error can't find the container with id db4637b534b90ad7fca87705f069508778c5e34d434e0102d5f4365c70dad9a9 Feb 23 13:24:55.004150 master-0 kubenswrapper[17411]: I0223 13:24:55.004093 17411 generic.go:334] "Generic (PLEG): container finished" podID="65d57483-a537-4ebc-bf88-960ed94423df" containerID="d9d2458d7a73782ea5581fe7e46b101857dce55510786d576486faadbd2fbe7c" exitCode=0 Feb 23 13:24:55.004391 master-0 kubenswrapper[17411]: I0223 13:24:55.004158 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" event={"ID":"65d57483-a537-4ebc-bf88-960ed94423df","Type":"ContainerDied","Data":"d9d2458d7a73782ea5581fe7e46b101857dce55510786d576486faadbd2fbe7c"} Feb 23 13:24:55.004391 master-0 kubenswrapper[17411]: I0223 13:24:55.004213 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" event={"ID":"65d57483-a537-4ebc-bf88-960ed94423df","Type":"ContainerStarted","Data":"db4637b534b90ad7fca87705f069508778c5e34d434e0102d5f4365c70dad9a9"} Feb 23 13:24:56.020342 master-0 kubenswrapper[17411]: I0223 13:24:56.020280 17411 generic.go:334] "Generic (PLEG): container finished" podID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerID="fdebb15620c54248e6d1e361ba1c0420de48345f48da3fa3238d30a243051734" exitCode=0 Feb 23 13:24:56.020342 master-0 kubenswrapper[17411]: I0223 
13:24:56.020346 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" event={"ID":"c79d2cf8-8f79-486e-9097-4fefccc77cf4","Type":"ContainerDied","Data":"fdebb15620c54248e6d1e361ba1c0420de48345f48da3fa3238d30a243051734"} Feb 23 13:24:57.035145 master-0 kubenswrapper[17411]: I0223 13:24:57.035011 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" event={"ID":"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc","Type":"ContainerStarted","Data":"4877c5f94941d3704793823d1c03621f5ea20d9755e626ddd48b3c01c1339eb3"} Feb 23 13:24:57.037519 master-0 kubenswrapper[17411]: I0223 13:24:57.037458 17411 generic.go:334] "Generic (PLEG): container finished" podID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerID="068778d6d0af1e61adfc8704aa05d33f20d022dcdf8a46fc4741503071498ef2" exitCode=0 Feb 23 13:24:57.037651 master-0 kubenswrapper[17411]: I0223 13:24:57.037555 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" event={"ID":"c79d2cf8-8f79-486e-9097-4fefccc77cf4","Type":"ContainerDied","Data":"068778d6d0af1e61adfc8704aa05d33f20d022dcdf8a46fc4741503071498ef2"} Feb 23 13:24:57.040329 master-0 kubenswrapper[17411]: I0223 13:24:57.040250 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" event={"ID":"65d57483-a537-4ebc-bf88-960ed94423df","Type":"ContainerStarted","Data":"d0898645f97c29c0f26f7095ee7c8346dcdad3dabb14c4adfc65cdd97a57a529"} Feb 23 13:24:58.054165 master-0 kubenswrapper[17411]: I0223 13:24:58.054095 17411 generic.go:334] "Generic (PLEG): container finished" podID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerID="4877c5f94941d3704793823d1c03621f5ea20d9755e626ddd48b3c01c1339eb3" exitCode=0 Feb 23 
13:24:58.054923 master-0 kubenswrapper[17411]: I0223 13:24:58.054196 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" event={"ID":"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc","Type":"ContainerDied","Data":"4877c5f94941d3704793823d1c03621f5ea20d9755e626ddd48b3c01c1339eb3"} Feb 23 13:24:58.059433 master-0 kubenswrapper[17411]: I0223 13:24:58.057597 17411 generic.go:334] "Generic (PLEG): container finished" podID="65d57483-a537-4ebc-bf88-960ed94423df" containerID="d0898645f97c29c0f26f7095ee7c8346dcdad3dabb14c4adfc65cdd97a57a529" exitCode=0 Feb 23 13:24:58.059433 master-0 kubenswrapper[17411]: I0223 13:24:58.057663 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" event={"ID":"65d57483-a537-4ebc-bf88-960ed94423df","Type":"ContainerDied","Data":"d0898645f97c29c0f26f7095ee7c8346dcdad3dabb14c4adfc65cdd97a57a529"} Feb 23 13:24:58.539859 master-0 kubenswrapper[17411]: I0223 13:24:58.539774 17411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:58.685646 master-0 kubenswrapper[17411]: I0223 13:24:58.685194 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-util\") pod \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " Feb 23 13:24:58.685646 master-0 kubenswrapper[17411]: I0223 13:24:58.685516 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-bundle\") pod \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " Feb 23 13:24:58.685646 master-0 kubenswrapper[17411]: I0223 13:24:58.685627 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr6l2\" (UniqueName: \"kubernetes.io/projected/c79d2cf8-8f79-486e-9097-4fefccc77cf4-kube-api-access-tr6l2\") pod \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\" (UID: \"c79d2cf8-8f79-486e-9097-4fefccc77cf4\") " Feb 23 13:24:58.686839 master-0 kubenswrapper[17411]: I0223 13:24:58.686753 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-bundle" (OuterVolumeSpecName: "bundle") pod "c79d2cf8-8f79-486e-9097-4fefccc77cf4" (UID: "c79d2cf8-8f79-486e-9097-4fefccc77cf4"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:24:58.689163 master-0 kubenswrapper[17411]: I0223 13:24:58.689049 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79d2cf8-8f79-486e-9097-4fefccc77cf4-kube-api-access-tr6l2" (OuterVolumeSpecName: "kube-api-access-tr6l2") pod "c79d2cf8-8f79-486e-9097-4fefccc77cf4" (UID: "c79d2cf8-8f79-486e-9097-4fefccc77cf4"). InnerVolumeSpecName "kube-api-access-tr6l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:24:58.725014 master-0 kubenswrapper[17411]: I0223 13:24:58.724926 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-util" (OuterVolumeSpecName: "util") pod "c79d2cf8-8f79-486e-9097-4fefccc77cf4" (UID: "c79d2cf8-8f79-486e-9097-4fefccc77cf4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:24:58.787771 master-0 kubenswrapper[17411]: I0223 13:24:58.787715 17411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-util\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:58.787771 master-0 kubenswrapper[17411]: I0223 13:24:58.787760 17411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c79d2cf8-8f79-486e-9097-4fefccc77cf4-bundle\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:58.787771 master-0 kubenswrapper[17411]: I0223 13:24:58.787776 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr6l2\" (UniqueName: \"kubernetes.io/projected/c79d2cf8-8f79-486e-9097-4fefccc77cf4-kube-api-access-tr6l2\") on node \"master-0\" DevicePath \"\"" Feb 23 13:24:59.070475 master-0 kubenswrapper[17411]: I0223 13:24:59.070375 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" event={"ID":"c79d2cf8-8f79-486e-9097-4fefccc77cf4","Type":"ContainerDied","Data":"5eb78f3da61c59af757c6208be494a1ff21ada33cd27869dc8846735da7ccd00"} Feb 23 13:24:59.070475 master-0 kubenswrapper[17411]: I0223 13:24:59.070456 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb78f3da61c59af757c6208be494a1ff21ada33cd27869dc8846735da7ccd00" Feb 23 13:24:59.071922 master-0 kubenswrapper[17411]: I0223 13:24:59.070491 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zvn42" Feb 23 13:24:59.076088 master-0 kubenswrapper[17411]: I0223 13:24:59.076015 17411 generic.go:334] "Generic (PLEG): container finished" podID="65d57483-a537-4ebc-bf88-960ed94423df" containerID="2cc3e76eb047df71b2f2f7e34caefbac93816104d63d2a5c78cd29f1c47d3780" exitCode=0 Feb 23 13:24:59.076272 master-0 kubenswrapper[17411]: I0223 13:24:59.076109 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" event={"ID":"65d57483-a537-4ebc-bf88-960ed94423df","Type":"ContainerDied","Data":"2cc3e76eb047df71b2f2f7e34caefbac93816104d63d2a5c78cd29f1c47d3780"} Feb 23 13:24:59.080521 master-0 kubenswrapper[17411]: I0223 13:24:59.080485 17411 generic.go:334] "Generic (PLEG): container finished" podID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerID="ee30e38c4139487ea97242ce325d717f93e02c5d48189126f1f8737f0f84b2a6" exitCode=0 Feb 23 13:24:59.080521 master-0 kubenswrapper[17411]: I0223 13:24:59.080524 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" 
event={"ID":"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc","Type":"ContainerDied","Data":"ee30e38c4139487ea97242ce325d717f93e02c5d48189126f1f8737f0f84b2a6"} Feb 23 13:24:59.099412 master-0 kubenswrapper[17411]: I0223 13:24:59.099320 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"] Feb 23 13:24:59.099779 master-0 kubenswrapper[17411]: E0223 13:24:59.099738 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerName="pull" Feb 23 13:24:59.099867 master-0 kubenswrapper[17411]: I0223 13:24:59.099770 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerName="pull" Feb 23 13:24:59.099867 master-0 kubenswrapper[17411]: E0223 13:24:59.099851 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerName="util" Feb 23 13:24:59.099867 master-0 kubenswrapper[17411]: I0223 13:24:59.099863 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerName="util" Feb 23 13:24:59.100186 master-0 kubenswrapper[17411]: E0223 13:24:59.099901 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerName="extract" Feb 23 13:24:59.100186 master-0 kubenswrapper[17411]: I0223 13:24:59.099915 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerName="extract" Feb 23 13:24:59.100499 master-0 kubenswrapper[17411]: I0223 13:24:59.100313 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79d2cf8-8f79-486e-9097-4fefccc77cf4" containerName="extract" Feb 23 13:24:59.102012 master-0 kubenswrapper[17411]: I0223 13:24:59.101963 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.120694 master-0 kubenswrapper[17411]: I0223 13:24:59.120639 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"]
Feb 23 13:24:59.299615 master-0 kubenswrapper[17411]: I0223 13:24:59.299523 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.299905 master-0 kubenswrapper[17411]: I0223 13:24:59.299683 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.300110 master-0 kubenswrapper[17411]: I0223 13:24:59.300042 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmf69\" (UniqueName: \"kubernetes.io/projected/84891227-3eff-491a-b71f-6a5422e6bdb1-kube-api-access-rmf69\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.401800 master-0 kubenswrapper[17411]: I0223 13:24:59.401535 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.401800 master-0 kubenswrapper[17411]: I0223 13:24:59.401686 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.402159 master-0 kubenswrapper[17411]: I0223 13:24:59.401940 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmf69\" (UniqueName: \"kubernetes.io/projected/84891227-3eff-491a-b71f-6a5422e6bdb1-kube-api-access-rmf69\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.402538 master-0 kubenswrapper[17411]: I0223 13:24:59.402470 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.402710 master-0 kubenswrapper[17411]: I0223 13:24:59.402483 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.433728 master-0 kubenswrapper[17411]: I0223 13:24:59.433641 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmf69\" (UniqueName: \"kubernetes.io/projected/84891227-3eff-491a-b71f-6a5422e6bdb1-kube-api-access-rmf69\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.436006 master-0 kubenswrapper[17411]: I0223 13:24:59.435954 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:24:59.926703 master-0 kubenswrapper[17411]: I0223 13:24:59.926626 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"]
Feb 23 13:24:59.931023 master-0 kubenswrapper[17411]: W0223 13:24:59.930943 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84891227_3eff_491a_b71f_6a5422e6bdb1.slice/crio-5a39a692994631a6b91eb79abd36f08e770b313cf186bc50358aab0045c86ed4 WatchSource:0}: Error finding container 5a39a692994631a6b91eb79abd36f08e770b313cf186bc50358aab0045c86ed4: Status 404 returned error can't find the container with id 5a39a692994631a6b91eb79abd36f08e770b313cf186bc50358aab0045c86ed4
Feb 23 13:25:00.103227 master-0 kubenswrapper[17411]: I0223 13:25:00.103166 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5" event={"ID":"84891227-3eff-491a-b71f-6a5422e6bdb1","Type":"ContainerStarted","Data":"5a39a692994631a6b91eb79abd36f08e770b313cf186bc50358aab0045c86ed4"}
Feb 23 13:25:00.620367 master-0 kubenswrapper[17411]: I0223 13:25:00.620292 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42"
Feb 23 13:25:00.625912 master-0 kubenswrapper[17411]: I0223 13:25:00.625868 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:25:00.724616 master-0 kubenswrapper[17411]: I0223 13:25:00.724519 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-util\") pod \"65d57483-a537-4ebc-bf88-960ed94423df\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") "
Feb 23 13:25:00.724616 master-0 kubenswrapper[17411]: I0223 13:25:00.724603 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-256xv\" (UniqueName: \"kubernetes.io/projected/65d57483-a537-4ebc-bf88-960ed94423df-kube-api-access-256xv\") pod \"65d57483-a537-4ebc-bf88-960ed94423df\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") "
Feb 23 13:25:00.724885 master-0 kubenswrapper[17411]: I0223 13:25:00.724718 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-bundle\") pod \"65d57483-a537-4ebc-bf88-960ed94423df\" (UID: \"65d57483-a537-4ebc-bf88-960ed94423df\") "
Feb 23 13:25:00.725949 master-0 kubenswrapper[17411]: I0223 13:25:00.725884 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-bundle" (OuterVolumeSpecName: "bundle") pod "65d57483-a537-4ebc-bf88-960ed94423df" (UID: "65d57483-a537-4ebc-bf88-960ed94423df"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:25:00.727627 master-0 kubenswrapper[17411]: I0223 13:25:00.727587 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d57483-a537-4ebc-bf88-960ed94423df-kube-api-access-256xv" (OuterVolumeSpecName: "kube-api-access-256xv") pod "65d57483-a537-4ebc-bf88-960ed94423df" (UID: "65d57483-a537-4ebc-bf88-960ed94423df"). InnerVolumeSpecName "kube-api-access-256xv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:25:00.734661 master-0 kubenswrapper[17411]: I0223 13:25:00.734608 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-util" (OuterVolumeSpecName: "util") pod "65d57483-a537-4ebc-bf88-960ed94423df" (UID: "65d57483-a537-4ebc-bf88-960ed94423df"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:25:00.826498 master-0 kubenswrapper[17411]: I0223 13:25:00.826305 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-util\") pod \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") "
Feb 23 13:25:00.826498 master-0 kubenswrapper[17411]: I0223 13:25:00.826410 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6npq\" (UniqueName: \"kubernetes.io/projected/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-kube-api-access-k6npq\") pod \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") "
Feb 23 13:25:00.826814 master-0 kubenswrapper[17411]: I0223 13:25:00.826569 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-bundle\") pod \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\" (UID: \"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc\") "
Feb 23 13:25:00.827288 master-0 kubenswrapper[17411]: I0223 13:25:00.827206 17411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:00.827342 master-0 kubenswrapper[17411]: I0223 13:25:00.827292 17411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65d57483-a537-4ebc-bf88-960ed94423df-util\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:00.827342 master-0 kubenswrapper[17411]: I0223 13:25:00.827313 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-256xv\" (UniqueName: \"kubernetes.io/projected/65d57483-a537-4ebc-bf88-960ed94423df-kube-api-access-256xv\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:00.828387 master-0 kubenswrapper[17411]: I0223 13:25:00.827747 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-bundle" (OuterVolumeSpecName: "bundle") pod "a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" (UID: "a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:25:00.832742 master-0 kubenswrapper[17411]: I0223 13:25:00.832688 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-kube-api-access-k6npq" (OuterVolumeSpecName: "kube-api-access-k6npq") pod "a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" (UID: "a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc"). InnerVolumeSpecName "kube-api-access-k6npq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:25:00.838281 master-0 kubenswrapper[17411]: I0223 13:25:00.838115 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-util" (OuterVolumeSpecName: "util") pod "a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" (UID: "a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:25:00.929095 master-0 kubenswrapper[17411]: I0223 13:25:00.928855 17411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:00.929095 master-0 kubenswrapper[17411]: I0223 13:25:00.929082 17411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-util\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:00.929095 master-0 kubenswrapper[17411]: I0223 13:25:00.929097 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6npq\" (UniqueName: \"kubernetes.io/projected/a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc-kube-api-access-k6npq\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:01.117786 master-0 kubenswrapper[17411]: I0223 13:25:01.117570 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn" event={"ID":"a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc","Type":"ContainerDied","Data":"18d737d140288b6222e529957a703c9a7839cffba003f7a2a692d1b47e5ea645"}
Feb 23 13:25:01.117786 master-0 kubenswrapper[17411]: I0223 13:25:01.117634 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rgwsn"
Feb 23 13:25:01.117786 master-0 kubenswrapper[17411]: I0223 13:25:01.117641 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d737d140288b6222e529957a703c9a7839cffba003f7a2a692d1b47e5ea645"
Feb 23 13:25:01.123122 master-0 kubenswrapper[17411]: I0223 13:25:01.123079 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42" event={"ID":"65d57483-a537-4ebc-bf88-960ed94423df","Type":"ContainerDied","Data":"db4637b534b90ad7fca87705f069508778c5e34d434e0102d5f4365c70dad9a9"}
Feb 23 13:25:01.123229 master-0 kubenswrapper[17411]: I0223 13:25:01.123126 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4637b534b90ad7fca87705f069508778c5e34d434e0102d5f4365c70dad9a9"
Feb 23 13:25:01.123229 master-0 kubenswrapper[17411]: I0223 13:25:01.123154 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4nz42"
Feb 23 13:25:01.127776 master-0 kubenswrapper[17411]: I0223 13:25:01.127726 17411 generic.go:334] "Generic (PLEG): container finished" podID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerID="99cf9d40bb29451c3d06ec49d9e9235660c8002cf69926e85ef30c6b818288b4" exitCode=0
Feb 23 13:25:01.127848 master-0 kubenswrapper[17411]: I0223 13:25:01.127794 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5" event={"ID":"84891227-3eff-491a-b71f-6a5422e6bdb1","Type":"ContainerDied","Data":"99cf9d40bb29451c3d06ec49d9e9235660c8002cf69926e85ef30c6b818288b4"}
Feb 23 13:25:03.157211 master-0 kubenswrapper[17411]: I0223 13:25:03.157151 17411 generic.go:334] "Generic (PLEG): container finished" podID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerID="f20d9cd5f50cd4dce1029ca53b9ca158a5ce313de83ac7a33bfa8a594dc31212" exitCode=0
Feb 23 13:25:03.157814 master-0 kubenswrapper[17411]: I0223 13:25:03.157289 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5" event={"ID":"84891227-3eff-491a-b71f-6a5422e6bdb1","Type":"ContainerDied","Data":"f20d9cd5f50cd4dce1029ca53b9ca158a5ce313de83ac7a33bfa8a594dc31212"}
Feb 23 13:25:04.167368 master-0 kubenswrapper[17411]: I0223 13:25:04.167296 17411 generic.go:334] "Generic (PLEG): container finished" podID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerID="76b04aa310da9fb22cd3f30273b51e5c3f48f953ab7839f2bc2bafd0b96e569d" exitCode=0
Feb 23 13:25:04.167368 master-0 kubenswrapper[17411]: I0223 13:25:04.167358 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5" event={"ID":"84891227-3eff-491a-b71f-6a5422e6bdb1","Type":"ContainerDied","Data":"76b04aa310da9fb22cd3f30273b51e5c3f48f953ab7839f2bc2bafd0b96e569d"}
Feb 23 13:25:05.527814 master-0 kubenswrapper[17411]: I0223 13:25:05.527372 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:25:05.635076 master-0 kubenswrapper[17411]: I0223 13:25:05.634967 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmf69\" (UniqueName: \"kubernetes.io/projected/84891227-3eff-491a-b71f-6a5422e6bdb1-kube-api-access-rmf69\") pod \"84891227-3eff-491a-b71f-6a5422e6bdb1\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") "
Feb 23 13:25:05.635076 master-0 kubenswrapper[17411]: I0223 13:25:05.635088 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-bundle\") pod \"84891227-3eff-491a-b71f-6a5422e6bdb1\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") "
Feb 23 13:25:05.635484 master-0 kubenswrapper[17411]: I0223 13:25:05.635163 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-util\") pod \"84891227-3eff-491a-b71f-6a5422e6bdb1\" (UID: \"84891227-3eff-491a-b71f-6a5422e6bdb1\") "
Feb 23 13:25:05.643012 master-0 kubenswrapper[17411]: I0223 13:25:05.640624 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84891227-3eff-491a-b71f-6a5422e6bdb1-kube-api-access-rmf69" (OuterVolumeSpecName: "kube-api-access-rmf69") pod "84891227-3eff-491a-b71f-6a5422e6bdb1" (UID: "84891227-3eff-491a-b71f-6a5422e6bdb1"). InnerVolumeSpecName "kube-api-access-rmf69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:25:05.647295 master-0 kubenswrapper[17411]: I0223 13:25:05.647195 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-bundle" (OuterVolumeSpecName: "bundle") pod "84891227-3eff-491a-b71f-6a5422e6bdb1" (UID: "84891227-3eff-491a-b71f-6a5422e6bdb1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:25:05.650261 master-0 kubenswrapper[17411]: I0223 13:25:05.650190 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-util" (OuterVolumeSpecName: "util") pod "84891227-3eff-491a-b71f-6a5422e6bdb1" (UID: "84891227-3eff-491a-b71f-6a5422e6bdb1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:25:05.737525 master-0 kubenswrapper[17411]: I0223 13:25:05.737288 17411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:05.737525 master-0 kubenswrapper[17411]: I0223 13:25:05.737338 17411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84891227-3eff-491a-b71f-6a5422e6bdb1-util\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:05.737525 master-0 kubenswrapper[17411]: I0223 13:25:05.737352 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmf69\" (UniqueName: \"kubernetes.io/projected/84891227-3eff-491a-b71f-6a5422e6bdb1-kube-api-access-rmf69\") on node \"master-0\" DevicePath \"\""
Feb 23 13:25:06.183528 master-0 kubenswrapper[17411]: I0223 13:25:06.183378 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"]
Feb 23 13:25:06.183764 master-0 kubenswrapper[17411]: E0223 13:25:06.183724 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerName="extract"
Feb 23 13:25:06.183764 master-0 kubenswrapper[17411]: I0223 13:25:06.183739 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerName="extract"
Feb 23 13:25:06.183764 master-0 kubenswrapper[17411]: E0223 13:25:06.183760 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d57483-a537-4ebc-bf88-960ed94423df" containerName="pull"
Feb 23 13:25:06.183764 master-0 kubenswrapper[17411]: I0223 13:25:06.183767 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d57483-a537-4ebc-bf88-960ed94423df" containerName="pull"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: E0223 13:25:06.183779 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerName="util"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: I0223 13:25:06.183789 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerName="util"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: E0223 13:25:06.183803 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d57483-a537-4ebc-bf88-960ed94423df" containerName="extract"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: I0223 13:25:06.183810 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d57483-a537-4ebc-bf88-960ed94423df" containerName="extract"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: E0223 13:25:06.183833 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerName="extract"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: I0223 13:25:06.183840 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerName="extract"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: E0223 13:25:06.183852 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerName="pull"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: I0223 13:25:06.183858 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerName="pull"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: E0223 13:25:06.183877 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerName="util"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: I0223 13:25:06.183883 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerName="util"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: E0223 13:25:06.183894 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerName="pull"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: I0223 13:25:06.183901 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerName="pull"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: E0223 13:25:06.183912 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d57483-a537-4ebc-bf88-960ed94423df" containerName="util"
Feb 23 13:25:06.184044 master-0 kubenswrapper[17411]: I0223 13:25:06.183918 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d57483-a537-4ebc-bf88-960ed94423df" containerName="util"
Feb 23 13:25:06.184942 master-0 kubenswrapper[17411]: I0223 13:25:06.184072 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d57483-a537-4ebc-bf88-960ed94423df" containerName="extract"
Feb 23 13:25:06.184942 master-0 kubenswrapper[17411]: I0223 13:25:06.184100 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="84891227-3eff-491a-b71f-6a5422e6bdb1" containerName="extract"
Feb 23 13:25:06.184942 master-0 kubenswrapper[17411]: I0223 13:25:06.184146 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a6d246-c5eb-4b70-bf56-1eb308dcc7bc" containerName="extract"
Feb 23 13:25:06.184942 master-0 kubenswrapper[17411]: I0223 13:25:06.184720 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"
Feb 23 13:25:06.187775 master-0 kubenswrapper[17411]: I0223 13:25:06.187375 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:25:06.188411 master-0 kubenswrapper[17411]: I0223 13:25:06.188072 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Feb 23 13:25:06.219899 master-0 kubenswrapper[17411]: I0223 13:25:06.219832 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5" event={"ID":"84891227-3eff-491a-b71f-6a5422e6bdb1","Type":"ContainerDied","Data":"5a39a692994631a6b91eb79abd36f08e770b313cf186bc50358aab0045c86ed4"}
Feb 23 13:25:06.219899 master-0 kubenswrapper[17411]: I0223 13:25:06.219878 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a39a692994631a6b91eb79abd36f08e770b313cf186bc50358aab0045c86ed4"
Feb 23 13:25:06.220161 master-0 kubenswrapper[17411]: I0223 13:25:06.220028 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cwwz5"
Feb 23 13:25:06.254179 master-0 kubenswrapper[17411]: I0223 13:25:06.253185 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4831c865-7950-4a37-87d4-ea5b8889cf23-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-gf7qr\" (UID: \"4831c865-7950-4a37-87d4-ea5b8889cf23\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"
Feb 23 13:25:06.254179 master-0 kubenswrapper[17411]: I0223 13:25:06.253319 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52lbs\" (UniqueName: \"kubernetes.io/projected/4831c865-7950-4a37-87d4-ea5b8889cf23-kube-api-access-52lbs\") pod \"cert-manager-operator-controller-manager-66c8bdd694-gf7qr\" (UID: \"4831c865-7950-4a37-87d4-ea5b8889cf23\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"
Feb 23 13:25:06.355556 master-0 kubenswrapper[17411]: I0223 13:25:06.355416 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4831c865-7950-4a37-87d4-ea5b8889cf23-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-gf7qr\" (UID: \"4831c865-7950-4a37-87d4-ea5b8889cf23\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"
Feb 23 13:25:06.355846 master-0 kubenswrapper[17411]: I0223 13:25:06.355641 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52lbs\" (UniqueName: \"kubernetes.io/projected/4831c865-7950-4a37-87d4-ea5b8889cf23-kube-api-access-52lbs\") pod \"cert-manager-operator-controller-manager-66c8bdd694-gf7qr\" (UID: \"4831c865-7950-4a37-87d4-ea5b8889cf23\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"
Feb 23 13:25:06.356137 master-0 kubenswrapper[17411]: I0223 13:25:06.356085 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4831c865-7950-4a37-87d4-ea5b8889cf23-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-gf7qr\" (UID: \"4831c865-7950-4a37-87d4-ea5b8889cf23\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"
Feb 23 13:25:06.382813 master-0 kubenswrapper[17411]: I0223 13:25:06.381543 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"]
Feb 23 13:25:06.442126 master-0 kubenswrapper[17411]: I0223 13:25:06.442080 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52lbs\" (UniqueName: \"kubernetes.io/projected/4831c865-7950-4a37-87d4-ea5b8889cf23-kube-api-access-52lbs\") pod \"cert-manager-operator-controller-manager-66c8bdd694-gf7qr\" (UID: \"4831c865-7950-4a37-87d4-ea5b8889cf23\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"
Feb 23 13:25:06.501489 master-0 kubenswrapper[17411]: I0223 13:25:06.501407 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"
Feb 23 13:25:07.036898 master-0 kubenswrapper[17411]: I0223 13:25:07.036831 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr"]
Feb 23 13:25:07.042129 master-0 kubenswrapper[17411]: W0223 13:25:07.042052 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4831c865_7950_4a37_87d4_ea5b8889cf23.slice/crio-502a2f1e3a0eaadb2775be6d2734ab29a085ba5563db9055216a7c02641c7595 WatchSource:0}: Error finding container 502a2f1e3a0eaadb2775be6d2734ab29a085ba5563db9055216a7c02641c7595: Status 404 returned error can't find the container with id 502a2f1e3a0eaadb2775be6d2734ab29a085ba5563db9055216a7c02641c7595
Feb 23 13:25:07.234173 master-0 kubenswrapper[17411]: I0223 13:25:07.234082 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr" event={"ID":"4831c865-7950-4a37-87d4-ea5b8889cf23","Type":"ContainerStarted","Data":"502a2f1e3a0eaadb2775be6d2734ab29a085ba5563db9055216a7c02641c7595"}
Feb 23 13:25:11.270951 master-0 kubenswrapper[17411]: I0223 13:25:11.270889 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr" event={"ID":"4831c865-7950-4a37-87d4-ea5b8889cf23","Type":"ContainerStarted","Data":"665f50ea5a3619540485983f6180e8cbae4776cb9c8d732050dbc3b5859982b7"}
Feb 23 13:25:11.308435 master-0 kubenswrapper[17411]: I0223 13:25:11.308337 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-gf7qr" podStartSLOduration=2.083593558 podStartE2EDuration="5.30830908s" podCreationTimestamp="2026-02-23 13:25:06 +0000 UTC" firstStartedPulling="2026-02-23 13:25:07.04534723 +0000 UTC m=+1100.472853837" lastFinishedPulling="2026-02-23 13:25:10.270062762 +0000 UTC m=+1103.697569359" observedRunningTime="2026-02-23 13:25:11.304722648 +0000 UTC m=+1104.732229265" watchObservedRunningTime="2026-02-23 13:25:11.30830908 +0000 UTC m=+1104.735815677"
Feb 23 13:25:15.366379 master-0 kubenswrapper[17411]: I0223 13:25:15.366312 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-57lv2"]
Feb 23 13:25:15.367341 master-0 kubenswrapper[17411]: I0223 13:25:15.367308 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2"
Feb 23 13:25:15.369639 master-0 kubenswrapper[17411]: I0223 13:25:15.369601 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 23 13:25:15.371459 master-0 kubenswrapper[17411]: I0223 13:25:15.371410 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 23 13:25:15.401148 master-0 kubenswrapper[17411]: I0223 13:25:15.401088 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-57lv2"]
Feb 23 13:25:15.444270 master-0 kubenswrapper[17411]: I0223 13:25:15.439692 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smh6z\" (UniqueName: \"kubernetes.io/projected/5fcadd0c-25fd-4866-9614-c84401635199-kube-api-access-smh6z\") pod \"cert-manager-cainjector-5545bd876-57lv2\" (UID: \"5fcadd0c-25fd-4866-9614-c84401635199\") " pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2"
Feb 23 13:25:15.444270 master-0 kubenswrapper[17411]: I0223 13:25:15.440017 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5fcadd0c-25fd-4866-9614-c84401635199-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-57lv2\" (UID: \"5fcadd0c-25fd-4866-9614-c84401635199\") " pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2"
Feb 23 13:25:15.542021 master-0 kubenswrapper[17411]: I0223 13:25:15.541927 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smh6z\" (UniqueName: \"kubernetes.io/projected/5fcadd0c-25fd-4866-9614-c84401635199-kube-api-access-smh6z\") pod \"cert-manager-cainjector-5545bd876-57lv2\" (UID: \"5fcadd0c-25fd-4866-9614-c84401635199\") " pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2"
Feb 23 13:25:15.542287 master-0 kubenswrapper[17411]: I0223 13:25:15.542074 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5fcadd0c-25fd-4866-9614-c84401635199-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-57lv2\" (UID: \"5fcadd0c-25fd-4866-9614-c84401635199\") " pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2"
Feb 23 13:25:15.572431 master-0 kubenswrapper[17411]: I0223 13:25:15.572354 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smh6z\" (UniqueName: \"kubernetes.io/projected/5fcadd0c-25fd-4866-9614-c84401635199-kube-api-access-smh6z\") pod \"cert-manager-cainjector-5545bd876-57lv2\" (UID: \"5fcadd0c-25fd-4866-9614-c84401635199\") " pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2"
Feb 23 13:25:15.576963 master-0 kubenswrapper[17411]: I0223 13:25:15.576912 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5fcadd0c-25fd-4866-9614-c84401635199-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-57lv2\" (UID: \"5fcadd0c-25fd-4866-9614-c84401635199\") " pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2"
Feb 23 13:25:15.715620 master-0 kubenswrapper[17411]: I0223 13:25:15.715535 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2"
Feb 23 13:25:16.212053 master-0 kubenswrapper[17411]: I0223 13:25:16.211975 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-57lv2"]
Feb 23 13:25:16.312639 master-0 kubenswrapper[17411]: I0223 13:25:16.312543 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2" event={"ID":"5fcadd0c-25fd-4866-9614-c84401635199","Type":"ContainerStarted","Data":"a2ccd73b501c3abf3815851907423e1fbbda340193e745f00a96de823c35d2d8"}
Feb 23 13:25:16.320157 master-0 kubenswrapper[17411]: I0223 13:25:16.320095 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-jbvst"]
Feb 23 13:25:16.321164 master-0 kubenswrapper[17411]: I0223 13:25:16.321128 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-jbvst"
Feb 23 13:25:16.347260 master-0 kubenswrapper[17411]: I0223 13:25:16.347179 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-jbvst"]
Feb 23 13:25:16.360703 master-0 kubenswrapper[17411]: I0223 13:25:16.360619 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmzp7\" (UniqueName: \"kubernetes.io/projected/757a2325-1dbe-408d-b51f-fbde873a4a9c-kube-api-access-gmzp7\") pod \"cert-manager-webhook-6888856db4-jbvst\" (UID: \"757a2325-1dbe-408d-b51f-fbde873a4a9c\") " pod="cert-manager/cert-manager-webhook-6888856db4-jbvst"
Feb 23 13:25:16.360878 master-0 kubenswrapper[17411]: I0223 13:25:16.360805 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/757a2325-1dbe-408d-b51f-fbde873a4a9c-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-jbvst\" (UID: \"757a2325-1dbe-408d-b51f-fbde873a4a9c\") " pod="cert-manager/cert-manager-webhook-6888856db4-jbvst"
Feb 23 13:25:16.462741 master-0 kubenswrapper[17411]: I0223 13:25:16.462580 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmzp7\" (UniqueName: \"kubernetes.io/projected/757a2325-1dbe-408d-b51f-fbde873a4a9c-kube-api-access-gmzp7\") pod \"cert-manager-webhook-6888856db4-jbvst\" (UID: \"757a2325-1dbe-408d-b51f-fbde873a4a9c\") " pod="cert-manager/cert-manager-webhook-6888856db4-jbvst"
Feb 23 13:25:16.462741 master-0 kubenswrapper[17411]: I0223 13:25:16.462702 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/757a2325-1dbe-408d-b51f-fbde873a4a9c-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-jbvst\" (UID: \"757a2325-1dbe-408d-b51f-fbde873a4a9c\") "
pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" Feb 23 13:25:16.478619 master-0 kubenswrapper[17411]: I0223 13:25:16.478553 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/757a2325-1dbe-408d-b51f-fbde873a4a9c-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-jbvst\" (UID: \"757a2325-1dbe-408d-b51f-fbde873a4a9c\") " pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" Feb 23 13:25:16.478783 master-0 kubenswrapper[17411]: I0223 13:25:16.478646 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmzp7\" (UniqueName: \"kubernetes.io/projected/757a2325-1dbe-408d-b51f-fbde873a4a9c-kube-api-access-gmzp7\") pod \"cert-manager-webhook-6888856db4-jbvst\" (UID: \"757a2325-1dbe-408d-b51f-fbde873a4a9c\") " pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" Feb 23 13:25:16.638037 master-0 kubenswrapper[17411]: I0223 13:25:16.637968 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" Feb 23 13:25:17.083882 master-0 kubenswrapper[17411]: I0223 13:25:17.083802 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-jbvst"] Feb 23 13:25:17.323520 master-0 kubenswrapper[17411]: I0223 13:25:17.323363 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" event={"ID":"757a2325-1dbe-408d-b51f-fbde873a4a9c","Type":"ContainerStarted","Data":"966e9b39e357bceec0247fd38ce6a569c4904a8b01b8357aa11ef14a0603140e"} Feb 23 13:25:17.704986 master-0 kubenswrapper[17411]: I0223 13:25:17.704268 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-n8fgd"] Feb 23 13:25:17.705670 master-0 kubenswrapper[17411]: I0223 13:25:17.705594 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-n8fgd" Feb 23 13:25:17.710355 master-0 kubenswrapper[17411]: I0223 13:25:17.710290 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 23 13:25:17.710588 master-0 kubenswrapper[17411]: I0223 13:25:17.710474 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 23 13:25:17.719104 master-0 kubenswrapper[17411]: I0223 13:25:17.719023 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-n8fgd"] Feb 23 13:25:17.785526 master-0 kubenswrapper[17411]: I0223 13:25:17.785465 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grcsx\" (UniqueName: \"kubernetes.io/projected/c549d91c-a7a5-4d21-a2a6-75c9dd41da7c-kube-api-access-grcsx\") pod \"nmstate-operator-694c9596b7-n8fgd\" (UID: \"c549d91c-a7a5-4d21-a2a6-75c9dd41da7c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-n8fgd" Feb 23 13:25:17.888837 master-0 kubenswrapper[17411]: I0223 13:25:17.888260 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grcsx\" (UniqueName: \"kubernetes.io/projected/c549d91c-a7a5-4d21-a2a6-75c9dd41da7c-kube-api-access-grcsx\") pod \"nmstate-operator-694c9596b7-n8fgd\" (UID: \"c549d91c-a7a5-4d21-a2a6-75c9dd41da7c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-n8fgd" Feb 23 13:25:17.912137 master-0 kubenswrapper[17411]: I0223 13:25:17.912085 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grcsx\" (UniqueName: \"kubernetes.io/projected/c549d91c-a7a5-4d21-a2a6-75c9dd41da7c-kube-api-access-grcsx\") pod \"nmstate-operator-694c9596b7-n8fgd\" (UID: \"c549d91c-a7a5-4d21-a2a6-75c9dd41da7c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-n8fgd" Feb 23 13:25:18.037776 
master-0 kubenswrapper[17411]: I0223 13:25:18.037618 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-n8fgd" Feb 23 13:25:18.481075 master-0 kubenswrapper[17411]: I0223 13:25:18.475329 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-n8fgd"] Feb 23 13:25:19.355431 master-0 kubenswrapper[17411]: I0223 13:25:19.355353 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-n8fgd" event={"ID":"c549d91c-a7a5-4d21-a2a6-75c9dd41da7c","Type":"ContainerStarted","Data":"e3b7bb395451b81f00b76c8f08504a68f17dcc996a5c209bc7aa877aba419b01"} Feb 23 13:25:22.407693 master-0 kubenswrapper[17411]: I0223 13:25:22.407568 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2" event={"ID":"5fcadd0c-25fd-4866-9614-c84401635199","Type":"ContainerStarted","Data":"8a16f06a9f54954aff741051ce8798f78ee8544baf95fad01d4e87c4e1a46086"} Feb 23 13:25:22.424621 master-0 kubenswrapper[17411]: I0223 13:25:22.424548 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" event={"ID":"757a2325-1dbe-408d-b51f-fbde873a4a9c","Type":"ContainerStarted","Data":"aceb93582d22b066b6175c3a779ee2f9087c14d56f3cadce5b3533bd49d5f78f"} Feb 23 13:25:22.425005 master-0 kubenswrapper[17411]: I0223 13:25:22.424973 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" Feb 23 13:25:22.468232 master-0 kubenswrapper[17411]: I0223 13:25:22.468128 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" podStartSLOduration=1.5404980959999999 podStartE2EDuration="6.468108503s" podCreationTimestamp="2026-02-23 13:25:16 +0000 UTC" firstStartedPulling="2026-02-23 13:25:17.06685052 +0000 
UTC m=+1110.494357117" lastFinishedPulling="2026-02-23 13:25:21.994460927 +0000 UTC m=+1115.421967524" observedRunningTime="2026-02-23 13:25:22.467926218 +0000 UTC m=+1115.895432815" watchObservedRunningTime="2026-02-23 13:25:22.468108503 +0000 UTC m=+1115.895615100" Feb 23 13:25:22.475564 master-0 kubenswrapper[17411]: I0223 13:25:22.475472 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-57lv2" podStartSLOduration=1.727744969 podStartE2EDuration="7.475447752s" podCreationTimestamp="2026-02-23 13:25:15 +0000 UTC" firstStartedPulling="2026-02-23 13:25:16.224424839 +0000 UTC m=+1109.651931436" lastFinishedPulling="2026-02-23 13:25:21.972127622 +0000 UTC m=+1115.399634219" observedRunningTime="2026-02-23 13:25:22.444509172 +0000 UTC m=+1115.872015769" watchObservedRunningTime="2026-02-23 13:25:22.475447752 +0000 UTC m=+1115.902954349" Feb 23 13:25:24.440932 master-0 kubenswrapper[17411]: I0223 13:25:24.440872 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-n8fgd" event={"ID":"c549d91c-a7a5-4d21-a2a6-75c9dd41da7c","Type":"ContainerStarted","Data":"1e8e18f7a04ce43ebe390f2ed112a5671384e2abbd5fcb4a451a0b212c464c79"} Feb 23 13:25:24.467806 master-0 kubenswrapper[17411]: I0223 13:25:24.467708 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-n8fgd" podStartSLOduration=1.863840081 podStartE2EDuration="7.467689012s" podCreationTimestamp="2026-02-23 13:25:17 +0000 UTC" firstStartedPulling="2026-02-23 13:25:18.486014159 +0000 UTC m=+1111.913520756" lastFinishedPulling="2026-02-23 13:25:24.08986309 +0000 UTC m=+1117.517369687" observedRunningTime="2026-02-23 13:25:24.465309425 +0000 UTC m=+1117.892816022" watchObservedRunningTime="2026-02-23 13:25:24.467689012 +0000 UTC m=+1117.895195609" Feb 23 13:25:26.171437 master-0 kubenswrapper[17411]: I0223 13:25:26.171369 17411 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp"] Feb 23 13:25:26.172382 master-0 kubenswrapper[17411]: I0223 13:25:26.172355 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.176202 master-0 kubenswrapper[17411]: I0223 13:25:26.176146 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 23 13:25:26.179969 master-0 kubenswrapper[17411]: I0223 13:25:26.179775 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 23 13:25:26.183897 master-0 kubenswrapper[17411]: I0223 13:25:26.183727 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 23 13:25:26.189621 master-0 kubenswrapper[17411]: I0223 13:25:26.189570 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 23 13:25:26.279112 master-0 kubenswrapper[17411]: I0223 13:25:26.279034 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgcx\" (UniqueName: \"kubernetes.io/projected/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-kube-api-access-cqgcx\") pod \"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: \"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.279112 master-0 kubenswrapper[17411]: I0223 13:25:26.279113 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-webhook-cert\") pod \"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: 
\"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.279423 master-0 kubenswrapper[17411]: I0223 13:25:26.279157 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-apiservice-cert\") pod \"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: \"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.293934 master-0 kubenswrapper[17411]: I0223 13:25:26.291436 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp"] Feb 23 13:25:26.382305 master-0 kubenswrapper[17411]: I0223 13:25:26.382204 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqgcx\" (UniqueName: \"kubernetes.io/projected/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-kube-api-access-cqgcx\") pod \"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: \"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.382305 master-0 kubenswrapper[17411]: I0223 13:25:26.382299 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-webhook-cert\") pod \"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: \"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.382650 master-0 kubenswrapper[17411]: I0223 13:25:26.382337 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-apiservice-cert\") pod 
\"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: \"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.387180 master-0 kubenswrapper[17411]: I0223 13:25:26.387135 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-apiservice-cert\") pod \"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: \"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.391558 master-0 kubenswrapper[17411]: I0223 13:25:26.391523 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-webhook-cert\") pod \"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: \"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.439284 master-0 kubenswrapper[17411]: I0223 13:25:26.439228 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqgcx\" (UniqueName: \"kubernetes.io/projected/7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb-kube-api-access-cqgcx\") pod \"metallb-operator-controller-manager-9c7dc799c-sfbtp\" (UID: \"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb\") " pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.491977 master-0 kubenswrapper[17411]: I0223 13:25:26.491912 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:26.830274 master-0 kubenswrapper[17411]: I0223 13:25:26.825181 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm"] Feb 23 13:25:26.830274 master-0 kubenswrapper[17411]: I0223 13:25:26.826774 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:26.835688 master-0 kubenswrapper[17411]: I0223 13:25:26.835643 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 23 13:25:26.836027 master-0 kubenswrapper[17411]: I0223 13:25:26.835773 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 13:25:26.859422 master-0 kubenswrapper[17411]: I0223 13:25:26.858231 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm"] Feb 23 13:25:26.907281 master-0 kubenswrapper[17411]: I0223 13:25:26.905998 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e85ae301-6907-45eb-8e7e-5d204f555c34-apiservice-cert\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:26.907281 master-0 kubenswrapper[17411]: I0223 13:25:26.906072 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cqgt\" (UniqueName: \"kubernetes.io/projected/e85ae301-6907-45eb-8e7e-5d204f555c34-kube-api-access-7cqgt\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " 
pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:26.907281 master-0 kubenswrapper[17411]: I0223 13:25:26.906128 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e85ae301-6907-45eb-8e7e-5d204f555c34-webhook-cert\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:27.008099 master-0 kubenswrapper[17411]: I0223 13:25:27.008035 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e85ae301-6907-45eb-8e7e-5d204f555c34-apiservice-cert\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:27.008324 master-0 kubenswrapper[17411]: I0223 13:25:27.008139 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cqgt\" (UniqueName: \"kubernetes.io/projected/e85ae301-6907-45eb-8e7e-5d204f555c34-kube-api-access-7cqgt\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:27.008375 master-0 kubenswrapper[17411]: I0223 13:25:27.008318 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e85ae301-6907-45eb-8e7e-5d204f555c34-webhook-cert\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:27.013882 master-0 kubenswrapper[17411]: I0223 13:25:27.013844 17411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e85ae301-6907-45eb-8e7e-5d204f555c34-webhook-cert\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:27.026343 master-0 kubenswrapper[17411]: I0223 13:25:27.022898 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e85ae301-6907-45eb-8e7e-5d204f555c34-apiservice-cert\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:27.044934 master-0 kubenswrapper[17411]: I0223 13:25:27.032272 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cqgt\" (UniqueName: \"kubernetes.io/projected/e85ae301-6907-45eb-8e7e-5d204f555c34-kube-api-access-7cqgt\") pod \"metallb-operator-webhook-server-6784d56d47-htvpm\" (UID: \"e85ae301-6907-45eb-8e7e-5d204f555c34\") " pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:27.136486 master-0 kubenswrapper[17411]: I0223 13:25:27.135709 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp"] Feb 23 13:25:27.163313 master-0 kubenswrapper[17411]: I0223 13:25:27.163228 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:27.488321 master-0 kubenswrapper[17411]: I0223 13:25:27.487307 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" event={"ID":"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb","Type":"ContainerStarted","Data":"25e15803a72b87b1947a2cb9c334c3ab6df4d56494b47ab4ca4edaed172f4b2c"} Feb 23 13:25:27.878158 master-0 kubenswrapper[17411]: W0223 13:25:27.878077 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode85ae301_6907_45eb_8e7e_5d204f555c34.slice/crio-43f744b5122f29ddb3fa8964e5dc958bd6bd28515870fc712724624212ca20d6 WatchSource:0}: Error finding container 43f744b5122f29ddb3fa8964e5dc958bd6bd28515870fc712724624212ca20d6: Status 404 returned error can't find the container with id 43f744b5122f29ddb3fa8964e5dc958bd6bd28515870fc712724624212ca20d6 Feb 23 13:25:27.886666 master-0 kubenswrapper[17411]: I0223 13:25:27.886615 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm"] Feb 23 13:25:28.516771 master-0 kubenswrapper[17411]: I0223 13:25:28.516671 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" event={"ID":"e85ae301-6907-45eb-8e7e-5d204f555c34","Type":"ContainerStarted","Data":"43f744b5122f29ddb3fa8964e5dc958bd6bd28515870fc712724624212ca20d6"} Feb 23 13:25:31.579944 master-0 kubenswrapper[17411]: I0223 13:25:31.579870 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" event={"ID":"7b6ec24d-9de8-4d37-80a9-ce0f1b628bbb","Type":"ContainerStarted","Data":"b656d2522c85c6eda70d38e6f0a7fed82afdb90c5c84e8ca95b1b5b9921ffd31"} Feb 23 13:25:31.580496 master-0 kubenswrapper[17411]: I0223 13:25:31.580463 
17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:25:31.607538 master-0 kubenswrapper[17411]: I0223 13:25:31.607435 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" podStartSLOduration=1.8013762309999999 podStartE2EDuration="5.607415331s" podCreationTimestamp="2026-02-23 13:25:26 +0000 UTC" firstStartedPulling="2026-02-23 13:25:27.13970633 +0000 UTC m=+1120.567212937" lastFinishedPulling="2026-02-23 13:25:30.94574544 +0000 UTC m=+1124.373252037" observedRunningTime="2026-02-23 13:25:31.602856792 +0000 UTC m=+1125.030363409" watchObservedRunningTime="2026-02-23 13:25:31.607415331 +0000 UTC m=+1125.034921928" Feb 23 13:25:31.648441 master-0 kubenswrapper[17411]: I0223 13:25:31.648387 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-jbvst" Feb 23 13:25:32.550577 master-0 kubenswrapper[17411]: I0223 13:25:32.550532 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-fv6s7"] Feb 23 13:25:32.567690 master-0 kubenswrapper[17411]: I0223 13:25:32.567632 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-fv6s7" Feb 23 13:25:32.592817 master-0 kubenswrapper[17411]: I0223 13:25:32.592767 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-fv6s7"] Feb 23 13:25:32.676515 master-0 kubenswrapper[17411]: I0223 13:25:32.676458 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s59ft\" (UniqueName: \"kubernetes.io/projected/e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f-kube-api-access-s59ft\") pod \"cert-manager-545d4d4674-fv6s7\" (UID: \"e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f\") " pod="cert-manager/cert-manager-545d4d4674-fv6s7" Feb 23 13:25:32.676772 master-0 kubenswrapper[17411]: I0223 13:25:32.676579 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f-bound-sa-token\") pod \"cert-manager-545d4d4674-fv6s7\" (UID: \"e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f\") " pod="cert-manager/cert-manager-545d4d4674-fv6s7" Feb 23 13:25:32.694424 master-0 kubenswrapper[17411]: I0223 13:25:32.694353 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6"] Feb 23 13:25:32.695576 master-0 kubenswrapper[17411]: I0223 13:25:32.695526 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6" Feb 23 13:25:32.700517 master-0 kubenswrapper[17411]: I0223 13:25:32.700476 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 23 13:25:32.700823 master-0 kubenswrapper[17411]: I0223 13:25:32.700793 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 23 13:25:32.708319 master-0 kubenswrapper[17411]: I0223 13:25:32.708236 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6"] Feb 23 13:25:32.782300 master-0 kubenswrapper[17411]: I0223 13:25:32.778991 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s59ft\" (UniqueName: \"kubernetes.io/projected/e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f-kube-api-access-s59ft\") pod \"cert-manager-545d4d4674-fv6s7\" (UID: \"e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f\") " pod="cert-manager/cert-manager-545d4d4674-fv6s7" Feb 23 13:25:32.782300 master-0 kubenswrapper[17411]: I0223 13:25:32.779130 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtvx\" (UniqueName: \"kubernetes.io/projected/65551c85-ca1f-425a-9a64-c90b5c1723fc-kube-api-access-thtvx\") pod \"obo-prometheus-operator-68bc856cb9-27wd6\" (UID: \"65551c85-ca1f-425a-9a64-c90b5c1723fc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6" Feb 23 13:25:32.782300 master-0 kubenswrapper[17411]: I0223 13:25:32.779209 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f-bound-sa-token\") pod \"cert-manager-545d4d4674-fv6s7\" (UID: \"e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f\") " pod="cert-manager/cert-manager-545d4d4674-fv6s7" Feb 23 13:25:32.817778 
master-0 kubenswrapper[17411]: I0223 13:25:32.817629 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f-bound-sa-token\") pod \"cert-manager-545d4d4674-fv6s7\" (UID: \"e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f\") " pod="cert-manager/cert-manager-545d4d4674-fv6s7" Feb 23 13:25:32.832773 master-0 kubenswrapper[17411]: I0223 13:25:32.832588 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s59ft\" (UniqueName: \"kubernetes.io/projected/e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f-kube-api-access-s59ft\") pod \"cert-manager-545d4d4674-fv6s7\" (UID: \"e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f\") " pod="cert-manager/cert-manager-545d4d4674-fv6s7" Feb 23 13:25:32.848282 master-0 kubenswrapper[17411]: I0223 13:25:32.846409 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv"] Feb 23 13:25:32.848282 master-0 kubenswrapper[17411]: I0223 13:25:32.847935 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" Feb 23 13:25:32.852480 master-0 kubenswrapper[17411]: I0223 13:25:32.852422 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 23 13:25:32.857161 master-0 kubenswrapper[17411]: I0223 13:25:32.857067 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv"] Feb 23 13:25:32.865078 master-0 kubenswrapper[17411]: I0223 13:25:32.864998 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867"] Feb 23 13:25:32.877191 master-0 kubenswrapper[17411]: I0223 13:25:32.876813 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" Feb 23 13:25:32.880386 master-0 kubenswrapper[17411]: I0223 13:25:32.880357 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thtvx\" (UniqueName: \"kubernetes.io/projected/65551c85-ca1f-425a-9a64-c90b5c1723fc-kube-api-access-thtvx\") pod \"obo-prometheus-operator-68bc856cb9-27wd6\" (UID: \"65551c85-ca1f-425a-9a64-c90b5c1723fc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6" Feb 23 13:25:32.903930 master-0 kubenswrapper[17411]: I0223 13:25:32.903771 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867"] Feb 23 13:25:32.909882 master-0 kubenswrapper[17411]: I0223 13:25:32.909818 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thtvx\" (UniqueName: \"kubernetes.io/projected/65551c85-ca1f-425a-9a64-c90b5c1723fc-kube-api-access-thtvx\") pod \"obo-prometheus-operator-68bc856cb9-27wd6\" (UID: 
\"65551c85-ca1f-425a-9a64-c90b5c1723fc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6" Feb 23 13:25:32.924291 master-0 kubenswrapper[17411]: I0223 13:25:32.922940 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-fv6s7" Feb 23 13:25:32.984675 master-0 kubenswrapper[17411]: I0223 13:25:32.984604 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/176a327d-bc57-4758-bbf8-71e7aae63a8b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-xtdzv\" (UID: \"176a327d-bc57-4758-bbf8-71e7aae63a8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" Feb 23 13:25:32.985048 master-0 kubenswrapper[17411]: I0223 13:25:32.984755 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-d6867\" (UID: \"cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" Feb 23 13:25:32.985048 master-0 kubenswrapper[17411]: I0223 13:25:32.984801 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/176a327d-bc57-4758-bbf8-71e7aae63a8b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-xtdzv\" (UID: \"176a327d-bc57-4758-bbf8-71e7aae63a8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" Feb 23 13:25:32.987266 master-0 kubenswrapper[17411]: I0223 13:25:32.986080 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-d6867\" (UID: \"cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" Feb 23 13:25:32.995292 master-0 kubenswrapper[17411]: I0223 13:25:32.994768 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g49wk"] Feb 23 13:25:32.999424 master-0 kubenswrapper[17411]: I0223 13:25:32.996356 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:33.002732 master-0 kubenswrapper[17411]: I0223 13:25:33.002667 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 23 13:25:33.031272 master-0 kubenswrapper[17411]: I0223 13:25:33.031196 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g49wk"] Feb 23 13:25:33.046831 master-0 kubenswrapper[17411]: I0223 13:25:33.046710 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6" Feb 23 13:25:33.088291 master-0 kubenswrapper[17411]: I0223 13:25:33.088069 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/176a327d-bc57-4758-bbf8-71e7aae63a8b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-xtdzv\" (UID: \"176a327d-bc57-4758-bbf8-71e7aae63a8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" Feb 23 13:25:33.088291 master-0 kubenswrapper[17411]: I0223 13:25:33.088172 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ls5v\" (UniqueName: \"kubernetes.io/projected/8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3-kube-api-access-7ls5v\") pod \"observability-operator-59bdc8b94-g49wk\" (UID: \"8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3\") " pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:33.088291 master-0 kubenswrapper[17411]: I0223 13:25:33.088204 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g49wk\" (UID: \"8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3\") " pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:33.088291 master-0 kubenswrapper[17411]: I0223 13:25:33.088228 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-d6867\" (UID: \"cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" Feb 23 13:25:33.088291 master-0 
kubenswrapper[17411]: I0223 13:25:33.088289 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/176a327d-bc57-4758-bbf8-71e7aae63a8b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-xtdzv\" (UID: \"176a327d-bc57-4758-bbf8-71e7aae63a8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" Feb 23 13:25:33.088647 master-0 kubenswrapper[17411]: I0223 13:25:33.088327 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-d6867\" (UID: \"cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" Feb 23 13:25:33.093294 master-0 kubenswrapper[17411]: I0223 13:25:33.092964 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-d6867\" (UID: \"cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" Feb 23 13:25:33.094395 master-0 kubenswrapper[17411]: I0223 13:25:33.094351 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-d6867\" (UID: \"cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" Feb 23 13:25:33.098952 master-0 kubenswrapper[17411]: I0223 13:25:33.098878 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/176a327d-bc57-4758-bbf8-71e7aae63a8b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-xtdzv\" (UID: \"176a327d-bc57-4758-bbf8-71e7aae63a8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" Feb 23 13:25:33.098952 master-0 kubenswrapper[17411]: I0223 13:25:33.098903 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/176a327d-bc57-4758-bbf8-71e7aae63a8b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89788d749-xtdzv\" (UID: \"176a327d-bc57-4758-bbf8-71e7aae63a8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" Feb 23 13:25:33.163784 master-0 kubenswrapper[17411]: I0223 13:25:33.163704 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pg7qc"] Feb 23 13:25:33.165412 master-0 kubenswrapper[17411]: I0223 13:25:33.165366 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:33.187664 master-0 kubenswrapper[17411]: I0223 13:25:33.184656 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pg7qc"] Feb 23 13:25:33.193383 master-0 kubenswrapper[17411]: I0223 13:25:33.191676 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ls5v\" (UniqueName: \"kubernetes.io/projected/8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3-kube-api-access-7ls5v\") pod \"observability-operator-59bdc8b94-g49wk\" (UID: \"8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3\") " pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:33.194498 master-0 kubenswrapper[17411]: I0223 13:25:33.193837 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g49wk\" (UID: \"8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3\") " pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:33.196304 master-0 kubenswrapper[17411]: I0223 13:25:33.195997 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" Feb 23 13:25:33.203836 master-0 kubenswrapper[17411]: I0223 13:25:33.203759 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g49wk\" (UID: \"8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3\") " pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:33.211599 master-0 kubenswrapper[17411]: I0223 13:25:33.211412 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" Feb 23 13:25:33.214441 master-0 kubenswrapper[17411]: I0223 13:25:33.214390 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ls5v\" (UniqueName: \"kubernetes.io/projected/8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3-kube-api-access-7ls5v\") pod \"observability-operator-59bdc8b94-g49wk\" (UID: \"8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3\") " pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:33.303286 master-0 kubenswrapper[17411]: I0223 13:25:33.297816 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bb3e89a-b60c-49ab-a630-65aafebc70be-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pg7qc\" (UID: \"4bb3e89a-b60c-49ab-a630-65aafebc70be\") " pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:33.303286 master-0 kubenswrapper[17411]: I0223 13:25:33.297899 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9v7\" (UniqueName: \"kubernetes.io/projected/4bb3e89a-b60c-49ab-a630-65aafebc70be-kube-api-access-pb9v7\") pod \"perses-operator-5bf474d74f-pg7qc\" (UID: \"4bb3e89a-b60c-49ab-a630-65aafebc70be\") " pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:33.337290 master-0 kubenswrapper[17411]: I0223 13:25:33.333404 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:33.406352 master-0 kubenswrapper[17411]: I0223 13:25:33.402594 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bb3e89a-b60c-49ab-a630-65aafebc70be-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pg7qc\" (UID: \"4bb3e89a-b60c-49ab-a630-65aafebc70be\") " pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:33.406352 master-0 kubenswrapper[17411]: I0223 13:25:33.402721 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9v7\" (UniqueName: \"kubernetes.io/projected/4bb3e89a-b60c-49ab-a630-65aafebc70be-kube-api-access-pb9v7\") pod \"perses-operator-5bf474d74f-pg7qc\" (UID: \"4bb3e89a-b60c-49ab-a630-65aafebc70be\") " pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:33.406352 master-0 kubenswrapper[17411]: I0223 13:25:33.403831 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bb3e89a-b60c-49ab-a630-65aafebc70be-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pg7qc\" (UID: \"4bb3e89a-b60c-49ab-a630-65aafebc70be\") " pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:33.442566 master-0 kubenswrapper[17411]: I0223 13:25:33.442514 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9v7\" (UniqueName: \"kubernetes.io/projected/4bb3e89a-b60c-49ab-a630-65aafebc70be-kube-api-access-pb9v7\") pod \"perses-operator-5bf474d74f-pg7qc\" (UID: \"4bb3e89a-b60c-49ab-a630-65aafebc70be\") " pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:33.512255 master-0 kubenswrapper[17411]: I0223 13:25:33.512179 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:35.375220 master-0 kubenswrapper[17411]: I0223 13:25:35.375082 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv"] Feb 23 13:25:35.380420 master-0 kubenswrapper[17411]: I0223 13:25:35.380374 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-fv6s7"] Feb 23 13:25:35.392334 master-0 kubenswrapper[17411]: I0223 13:25:35.389329 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g49wk"] Feb 23 13:25:35.543389 master-0 kubenswrapper[17411]: I0223 13:25:35.543312 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867"] Feb 23 13:25:35.550629 master-0 kubenswrapper[17411]: I0223 13:25:35.550572 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6"] Feb 23 13:25:35.556281 master-0 kubenswrapper[17411]: I0223 13:25:35.556145 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pg7qc"] Feb 23 13:25:35.677299 master-0 kubenswrapper[17411]: I0223 13:25:35.677156 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-fv6s7" event={"ID":"e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f","Type":"ContainerStarted","Data":"80be8b83cb769908aa7db7a0f48b8727ddc077957b8bd197ce4dd65f84ab353c"} Feb 23 13:25:35.677299 master-0 kubenswrapper[17411]: I0223 13:25:35.677234 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-fv6s7" event={"ID":"e13e1a58-f0aa-4c41-8a3b-b2d23d54f04f","Type":"ContainerStarted","Data":"00f445208bd17c590be05e58b4a783220126fb8ed3cc9fa0804e3e4000cc2d12"} Feb 23 13:25:35.678525 master-0 
kubenswrapper[17411]: I0223 13:25:35.678486 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" event={"ID":"4bb3e89a-b60c-49ab-a630-65aafebc70be","Type":"ContainerStarted","Data":"a43a09e8f315213834c2f6443b87acde947eebabcb4cd68e6291239ebcc2e9fa"} Feb 23 13:25:35.679681 master-0 kubenswrapper[17411]: I0223 13:25:35.679654 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-g49wk" event={"ID":"8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3","Type":"ContainerStarted","Data":"44c1eca936b43a1a4796c238adc69186399d0d9fea55dfe97bf89ea4c8bdfdee"} Feb 23 13:25:35.681660 master-0 kubenswrapper[17411]: I0223 13:25:35.681609 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" event={"ID":"e85ae301-6907-45eb-8e7e-5d204f555c34","Type":"ContainerStarted","Data":"89c846c6e1567a62622d389ff10e9817c44d5c9034d3977538eafb5b0643481a"} Feb 23 13:25:35.681808 master-0 kubenswrapper[17411]: I0223 13:25:35.681785 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:35.682894 master-0 kubenswrapper[17411]: I0223 13:25:35.682865 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6" event={"ID":"65551c85-ca1f-425a-9a64-c90b5c1723fc","Type":"ContainerStarted","Data":"4e336b149a7fceaa34d24abced3a6d79420c09deeac7c3671c92a2a9efd919b6"} Feb 23 13:25:35.684324 master-0 kubenswrapper[17411]: I0223 13:25:35.684271 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" event={"ID":"176a327d-bc57-4758-bbf8-71e7aae63a8b","Type":"ContainerStarted","Data":"d7538b8e8888e4f80a88d5f44b7fc05c2c0e12f4c72dfb798a26d28576bf1261"} Feb 23 13:25:35.685463 master-0 
kubenswrapper[17411]: I0223 13:25:35.685432 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" event={"ID":"cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9","Type":"ContainerStarted","Data":"ee2b164e2775bb1bb99202a9f0983187b086ee639f2109cb22b44b0ce98c21c3"} Feb 23 13:25:35.864935 master-0 kubenswrapper[17411]: I0223 13:25:35.864845 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-fv6s7" podStartSLOduration=3.864821813 podStartE2EDuration="3.864821813s" podCreationTimestamp="2026-02-23 13:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:25:35.864652588 +0000 UTC m=+1129.292159185" watchObservedRunningTime="2026-02-23 13:25:35.864821813 +0000 UTC m=+1129.292328410" Feb 23 13:25:36.050995 master-0 kubenswrapper[17411]: I0223 13:25:36.050900 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" podStartSLOduration=3.429484171 podStartE2EDuration="10.050856292s" podCreationTimestamp="2026-02-23 13:25:26 +0000 UTC" firstStartedPulling="2026-02-23 13:25:27.881131619 +0000 UTC m=+1121.308638226" lastFinishedPulling="2026-02-23 13:25:34.50250375 +0000 UTC m=+1127.930010347" observedRunningTime="2026-02-23 13:25:36.047344962 +0000 UTC m=+1129.474851589" watchObservedRunningTime="2026-02-23 13:25:36.050856292 +0000 UTC m=+1129.478362889" Feb 23 13:25:47.166582 master-0 kubenswrapper[17411]: I0223 13:25:47.166512 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6784d56d47-htvpm" Feb 23 13:25:50.122591 master-0 kubenswrapper[17411]: I0223 13:25:50.122510 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" event={"ID":"4bb3e89a-b60c-49ab-a630-65aafebc70be","Type":"ContainerStarted","Data":"b10d1a4a4a757a659a0aa98e0369f552c0daec01050571157e165f7da5a63ae9"} Feb 23 13:25:50.123176 master-0 kubenswrapper[17411]: I0223 13:25:50.122719 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:25:50.124539 master-0 kubenswrapper[17411]: I0223 13:25:50.124477 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-g49wk" event={"ID":"8d5c6a60-d65e-403f-9f1d-f6c4e8e285d3","Type":"ContainerStarted","Data":"35eaf504f95b8ff0ed35bce27f5e1751a9f75e94b4024aa65521ea13fb9d9b99"} Feb 23 13:25:50.124702 master-0 kubenswrapper[17411]: I0223 13:25:50.124678 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:50.128264 master-0 kubenswrapper[17411]: I0223 13:25:50.127775 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6" event={"ID":"65551c85-ca1f-425a-9a64-c90b5c1723fc","Type":"ContainerStarted","Data":"1fcd839cec519cdac824aaec56f119742e69b5450ed2fca3891b697c2deaa0e1"} Feb 23 13:25:50.137280 master-0 kubenswrapper[17411]: I0223 13:25:50.136409 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-g49wk" Feb 23 13:25:50.145273 master-0 kubenswrapper[17411]: I0223 13:25:50.143306 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" event={"ID":"176a327d-bc57-4758-bbf8-71e7aae63a8b","Type":"ContainerStarted","Data":"50725831e1602ae0e9dc220c14ca70f7659eebd0cb29ae57ccc1346c24e4ddf8"} Feb 23 13:25:50.153761 master-0 kubenswrapper[17411]: I0223 13:25:50.153688 17411 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" event={"ID":"cb90f1d0-76a7-4450-ae88-9cb0b30ef2e9","Type":"ContainerStarted","Data":"72b4460a0f581bcf558c6b07c3cdeab6137e7c1112e7a869109bf1bb07c335c8"} Feb 23 13:25:50.163942 master-0 kubenswrapper[17411]: I0223 13:25:50.163865 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" podStartSLOduration=3.630148657 podStartE2EDuration="17.163847031s" podCreationTimestamp="2026-02-23 13:25:33 +0000 UTC" firstStartedPulling="2026-02-23 13:25:35.573740927 +0000 UTC m=+1129.001247524" lastFinishedPulling="2026-02-23 13:25:49.107439301 +0000 UTC m=+1142.534945898" observedRunningTime="2026-02-23 13:25:50.157704205 +0000 UTC m=+1143.585210802" watchObservedRunningTime="2026-02-23 13:25:50.163847031 +0000 UTC m=+1143.591353628" Feb 23 13:25:50.188457 master-0 kubenswrapper[17411]: I0223 13:25:50.188344 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-xtdzv" podStartSLOduration=4.456355694 podStartE2EDuration="18.188320645s" podCreationTimestamp="2026-02-23 13:25:32 +0000 UTC" firstStartedPulling="2026-02-23 13:25:35.377183309 +0000 UTC m=+1128.804689906" lastFinishedPulling="2026-02-23 13:25:49.10914826 +0000 UTC m=+1142.536654857" observedRunningTime="2026-02-23 13:25:50.183020263 +0000 UTC m=+1143.610526890" watchObservedRunningTime="2026-02-23 13:25:50.188320645 +0000 UTC m=+1143.615827262" Feb 23 13:25:50.231970 master-0 kubenswrapper[17411]: I0223 13:25:50.231557 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-27wd6" podStartSLOduration=4.714405796 podStartE2EDuration="18.231531018s" podCreationTimestamp="2026-02-23 13:25:32 +0000 UTC" firstStartedPulling="2026-02-23 
13:25:35.556338192 +0000 UTC m=+1128.983844789" lastFinishedPulling="2026-02-23 13:25:49.073463414 +0000 UTC m=+1142.500970011" observedRunningTime="2026-02-23 13:25:50.211974706 +0000 UTC m=+1143.639481313" watchObservedRunningTime="2026-02-23 13:25:50.231531018 +0000 UTC m=+1143.659037615" Feb 23 13:25:50.281294 master-0 kubenswrapper[17411]: I0223 13:25:50.281158 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89788d749-d6867" podStartSLOduration=4.772047052 podStartE2EDuration="18.281130385s" podCreationTimestamp="2026-02-23 13:25:32 +0000 UTC" firstStartedPulling="2026-02-23 13:25:35.564867985 +0000 UTC m=+1128.992374582" lastFinishedPulling="2026-02-23 13:25:49.073951318 +0000 UTC m=+1142.501457915" observedRunningTime="2026-02-23 13:25:50.246998173 +0000 UTC m=+1143.674504770" watchObservedRunningTime="2026-02-23 13:25:50.281130385 +0000 UTC m=+1143.708636982" Feb 23 13:25:50.338267 master-0 kubenswrapper[17411]: I0223 13:25:50.331901 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-g49wk" podStartSLOduration=4.574418909 podStartE2EDuration="18.331732031s" podCreationTimestamp="2026-02-23 13:25:32 +0000 UTC" firstStartedPulling="2026-02-23 13:25:35.381285575 +0000 UTC m=+1128.808792172" lastFinishedPulling="2026-02-23 13:25:49.138598697 +0000 UTC m=+1142.566105294" observedRunningTime="2026-02-23 13:25:50.29484152 +0000 UTC m=+1143.722348117" watchObservedRunningTime="2026-02-23 13:25:50.331732031 +0000 UTC m=+1143.759238628" Feb 23 13:26:03.517680 master-0 kubenswrapper[17411]: I0223 13:26:03.517587 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-pg7qc" Feb 23 13:26:06.497033 master-0 kubenswrapper[17411]: I0223 13:26:06.496938 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-controller-manager-9c7dc799c-sfbtp" Feb 23 13:26:15.637649 master-0 kubenswrapper[17411]: I0223 13:26:15.637185 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9v4qh"] Feb 23 13:26:15.640747 master-0 kubenswrapper[17411]: I0223 13:26:15.640699 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.642504 master-0 kubenswrapper[17411]: I0223 13:26:15.642461 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 23 13:26:15.642983 master-0 kubenswrapper[17411]: I0223 13:26:15.642967 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 23 13:26:15.658271 master-0 kubenswrapper[17411]: I0223 13:26:15.657489 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2"] Feb 23 13:26:15.659034 master-0 kubenswrapper[17411]: I0223 13:26:15.658841 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" Feb 23 13:26:15.662758 master-0 kubenswrapper[17411]: I0223 13:26:15.662030 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 23 13:26:15.675482 master-0 kubenswrapper[17411]: I0223 13:26:15.675405 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2"] Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680590 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-reloader\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680684 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-frr-conf\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680722 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-metrics\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680787 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8699c\" (UniqueName: \"kubernetes.io/projected/844d9bcd-edea-48a2-b38c-38669b47ed0b-kube-api-access-8699c\") pod \"frr-k8s-webhook-server-78b44bf5bb-jglz2\" (UID: 
\"844d9bcd-edea-48a2-b38c-38669b47ed0b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680817 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f31917ab-7d72-4bb3-8378-406df839219d-frr-startup\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680850 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghdck\" (UniqueName: \"kubernetes.io/projected/f31917ab-7d72-4bb3-8378-406df839219d-kube-api-access-ghdck\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680887 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/844d9bcd-edea-48a2-b38c-38669b47ed0b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jglz2\" (UID: \"844d9bcd-edea-48a2-b38c-38669b47ed0b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680934 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f31917ab-7d72-4bb3-8378-406df839219d-metrics-certs\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.683988 master-0 kubenswrapper[17411]: I0223 13:26:15.680973 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-frr-sockets\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.765435 master-0 kubenswrapper[17411]: I0223 13:26:15.765299 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jl7tk"] Feb 23 13:26:15.767127 master-0 kubenswrapper[17411]: I0223 13:26:15.767099 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.769112 master-0 kubenswrapper[17411]: I0223 13:26:15.769083 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 23 13:26:15.769679 master-0 kubenswrapper[17411]: I0223 13:26:15.769531 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 23 13:26:15.770122 master-0 kubenswrapper[17411]: I0223 13:26:15.770107 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 23 13:26:15.783002 master-0 kubenswrapper[17411]: I0223 13:26:15.782949 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8699c\" (UniqueName: \"kubernetes.io/projected/844d9bcd-edea-48a2-b38c-38669b47ed0b-kube-api-access-8699c\") pod \"frr-k8s-webhook-server-78b44bf5bb-jglz2\" (UID: \"844d9bcd-edea-48a2-b38c-38669b47ed0b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" Feb 23 13:26:15.783092 master-0 kubenswrapper[17411]: I0223 13:26:15.783036 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f31917ab-7d72-4bb3-8378-406df839219d-frr-startup\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.783162 master-0 kubenswrapper[17411]: I0223 13:26:15.783130 17411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-metrics-certs\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.783216 master-0 kubenswrapper[17411]: I0223 13:26:15.783175 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghdck\" (UniqueName: \"kubernetes.io/projected/f31917ab-7d72-4bb3-8378-406df839219d-kube-api-access-ghdck\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.783263 master-0 kubenswrapper[17411]: I0223 13:26:15.783230 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/844d9bcd-edea-48a2-b38c-38669b47ed0b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jglz2\" (UID: \"844d9bcd-edea-48a2-b38c-38669b47ed0b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" Feb 23 13:26:15.783535 master-0 kubenswrapper[17411]: I0223 13:26:15.783511 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f31917ab-7d72-4bb3-8378-406df839219d-metrics-certs\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.783602 master-0 kubenswrapper[17411]: I0223 13:26:15.783582 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.783637 master-0 kubenswrapper[17411]: I0223 13:26:15.783622 17411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-frr-sockets\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.783885 master-0 kubenswrapper[17411]: I0223 13:26:15.783856 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c98ff941-bd8e-4080-905c-d6d0a800ac06-metallb-excludel2\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.786566 master-0 kubenswrapper[17411]: I0223 13:26:15.786530 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-reloader\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.786697 master-0 kubenswrapper[17411]: I0223 13:26:15.786666 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-frr-conf\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.786799 master-0 kubenswrapper[17411]: I0223 13:26:15.786719 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czrjq\" (UniqueName: \"kubernetes.io/projected/c98ff941-bd8e-4080-905c-d6d0a800ac06-kube-api-access-czrjq\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.786799 master-0 kubenswrapper[17411]: I0223 13:26:15.786780 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-metrics\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.787231 master-0 kubenswrapper[17411]: I0223 13:26:15.787184 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f31917ab-7d72-4bb3-8378-406df839219d-frr-startup\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.787490 master-0 kubenswrapper[17411]: I0223 13:26:15.787448 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f31917ab-7d72-4bb3-8378-406df839219d-metrics-certs\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.787529 master-0 kubenswrapper[17411]: I0223 13:26:15.787493 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-metrics\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.787566 master-0 kubenswrapper[17411]: I0223 13:26:15.787526 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-reloader\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.787728 master-0 kubenswrapper[17411]: I0223 13:26:15.787699 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-frr-sockets\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 
13:26:15.788087 master-0 kubenswrapper[17411]: I0223 13:26:15.788040 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f31917ab-7d72-4bb3-8378-406df839219d-frr-conf\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.788355 master-0 kubenswrapper[17411]: I0223 13:26:15.788328 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/844d9bcd-edea-48a2-b38c-38669b47ed0b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jglz2\" (UID: \"844d9bcd-edea-48a2-b38c-38669b47ed0b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" Feb 23 13:26:15.796351 master-0 kubenswrapper[17411]: I0223 13:26:15.796268 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-wc65n"] Feb 23 13:26:15.800781 master-0 kubenswrapper[17411]: I0223 13:26:15.799035 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:15.809329 master-0 kubenswrapper[17411]: I0223 13:26:15.805627 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 23 13:26:15.811881 master-0 kubenswrapper[17411]: I0223 13:26:15.811714 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-wc65n"] Feb 23 13:26:15.818392 master-0 kubenswrapper[17411]: I0223 13:26:15.816490 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8699c\" (UniqueName: \"kubernetes.io/projected/844d9bcd-edea-48a2-b38c-38669b47ed0b-kube-api-access-8699c\") pod \"frr-k8s-webhook-server-78b44bf5bb-jglz2\" (UID: \"844d9bcd-edea-48a2-b38c-38669b47ed0b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" Feb 23 13:26:15.835338 master-0 kubenswrapper[17411]: I0223 13:26:15.835113 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghdck\" (UniqueName: \"kubernetes.io/projected/f31917ab-7d72-4bb3-8378-406df839219d-kube-api-access-ghdck\") pod \"frr-k8s-9v4qh\" (UID: \"f31917ab-7d72-4bb3-8378-406df839219d\") " pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.888573 master-0 kubenswrapper[17411]: I0223 13:26:15.888467 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czrjq\" (UniqueName: \"kubernetes.io/projected/c98ff941-bd8e-4080-905c-d6d0a800ac06-kube-api-access-czrjq\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.888751 master-0 kubenswrapper[17411]: I0223 13:26:15.888715 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-metrics-certs\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " 
pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.888795 master-0 kubenswrapper[17411]: I0223 13:26:15.888766 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-cert\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:15.888795 master-0 kubenswrapper[17411]: I0223 13:26:15.888793 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-metrics-certs\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:15.888867 master-0 kubenswrapper[17411]: I0223 13:26:15.888834 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.888911 master-0 kubenswrapper[17411]: I0223 13:26:15.888890 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c98ff941-bd8e-4080-905c-d6d0a800ac06-metallb-excludel2\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.889725 master-0 kubenswrapper[17411]: I0223 13:26:15.889686 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c98ff941-bd8e-4080-905c-d6d0a800ac06-metallb-excludel2\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 
13:26:15.889847 master-0 kubenswrapper[17411]: E0223 13:26:15.889122 17411 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 13:26:15.889885 master-0 kubenswrapper[17411]: I0223 13:26:15.889786 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fzzw\" (UniqueName: \"kubernetes.io/projected/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-kube-api-access-8fzzw\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:15.889925 master-0 kubenswrapper[17411]: E0223 13:26:15.889904 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist podName:c98ff941-bd8e-4080-905c-d6d0a800ac06 nodeName:}" failed. No retries permitted until 2026-02-23 13:26:16.389877963 +0000 UTC m=+1169.817384560 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist") pod "speaker-jl7tk" (UID: "c98ff941-bd8e-4080-905c-d6d0a800ac06") : secret "metallb-memberlist" not found Feb 23 13:26:15.892434 master-0 kubenswrapper[17411]: I0223 13:26:15.892392 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-metrics-certs\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.905731 master-0 kubenswrapper[17411]: I0223 13:26:15.905671 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czrjq\" (UniqueName: \"kubernetes.io/projected/c98ff941-bd8e-4080-905c-d6d0a800ac06-kube-api-access-czrjq\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:15.962418 master-0 kubenswrapper[17411]: I0223 13:26:15.962370 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9v4qh" Feb 23 13:26:15.986190 master-0 kubenswrapper[17411]: I0223 13:26:15.986119 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" Feb 23 13:26:15.991560 master-0 kubenswrapper[17411]: I0223 13:26:15.991482 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-cert\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:15.991560 master-0 kubenswrapper[17411]: I0223 13:26:15.991541 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-metrics-certs\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:15.992213 master-0 kubenswrapper[17411]: I0223 13:26:15.992184 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fzzw\" (UniqueName: \"kubernetes.io/projected/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-kube-api-access-8fzzw\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:15.993957 master-0 kubenswrapper[17411]: I0223 13:26:15.993734 17411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 13:26:16.000219 master-0 kubenswrapper[17411]: I0223 13:26:16.000176 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-metrics-certs\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:16.005322 master-0 kubenswrapper[17411]: I0223 13:26:16.005296 17411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-cert\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:16.008543 master-0 kubenswrapper[17411]: I0223 13:26:16.008523 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fzzw\" (UniqueName: \"kubernetes.io/projected/93dc0dd1-e259-4214-99c8-180fa7ac5ee8-kube-api-access-8fzzw\") pod \"controller-69bbfbf88f-wc65n\" (UID: \"93dc0dd1-e259-4214-99c8-180fa7ac5ee8\") " pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:16.183463 master-0 kubenswrapper[17411]: I0223 13:26:16.183350 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-wc65n" Feb 23 13:26:16.389049 master-0 kubenswrapper[17411]: I0223 13:26:16.388982 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerStarted","Data":"e82b47f76401e53d15683a0b4176083de8699f9f93fc8d5e37c13816ff367e61"} Feb 23 13:26:16.434333 master-0 kubenswrapper[17411]: I0223 13:26:16.433319 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:16.434333 master-0 kubenswrapper[17411]: E0223 13:26:16.433545 17411 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 13:26:16.434333 master-0 kubenswrapper[17411]: E0223 13:26:16.433666 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist podName:c98ff941-bd8e-4080-905c-d6d0a800ac06 
nodeName:}" failed. No retries permitted until 2026-02-23 13:26:17.433639205 +0000 UTC m=+1170.861145822 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist") pod "speaker-jl7tk" (UID: "c98ff941-bd8e-4080-905c-d6d0a800ac06") : secret "metallb-memberlist" not found Feb 23 13:26:16.468985 master-0 kubenswrapper[17411]: I0223 13:26:16.468896 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2"] Feb 23 13:26:16.481571 master-0 kubenswrapper[17411]: W0223 13:26:16.481369 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod844d9bcd_edea_48a2_b38c_38669b47ed0b.slice/crio-3f74b91f80537eca3c49fc3011c5993db8f31de3f430e775cd39cfed0d507c0c WatchSource:0}: Error finding container 3f74b91f80537eca3c49fc3011c5993db8f31de3f430e775cd39cfed0d507c0c: Status 404 returned error can't find the container with id 3f74b91f80537eca3c49fc3011c5993db8f31de3f430e775cd39cfed0d507c0c Feb 23 13:26:16.627742 master-0 kubenswrapper[17411]: I0223 13:26:16.626536 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-wc65n"] Feb 23 13:26:17.402318 master-0 kubenswrapper[17411]: I0223 13:26:17.400662 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-wc65n" event={"ID":"93dc0dd1-e259-4214-99c8-180fa7ac5ee8","Type":"ContainerStarted","Data":"a1f115a2171e5b945e3155182a45d73df31cb00021dcd14e029a8004af5652b2"} Feb 23 13:26:17.402318 master-0 kubenswrapper[17411]: I0223 13:26:17.400727 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-wc65n" event={"ID":"93dc0dd1-e259-4214-99c8-180fa7ac5ee8","Type":"ContainerStarted","Data":"dc5d7f97c506eb81a89f20156b6ed91ad37e2594884154f77322e5e27f0a4e22"} Feb 23 13:26:17.404272 
master-0 kubenswrapper[17411]: I0223 13:26:17.404217 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" event={"ID":"844d9bcd-edea-48a2-b38c-38669b47ed0b","Type":"ContainerStarted","Data":"3f74b91f80537eca3c49fc3011c5993db8f31de3f430e775cd39cfed0d507c0c"} Feb 23 13:26:17.464658 master-0 kubenswrapper[17411]: I0223 13:26:17.464577 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:17.477414 master-0 kubenswrapper[17411]: I0223 13:26:17.477364 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c98ff941-bd8e-4080-905c-d6d0a800ac06-memberlist\") pod \"speaker-jl7tk\" (UID: \"c98ff941-bd8e-4080-905c-d6d0a800ac06\") " pod="metallb-system/speaker-jl7tk" Feb 23 13:26:17.589302 master-0 kubenswrapper[17411]: I0223 13:26:17.587732 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jl7tk" Feb 23 13:26:17.897381 master-0 kubenswrapper[17411]: I0223 13:26:17.897307 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9"] Feb 23 13:26:17.903982 master-0 kubenswrapper[17411]: I0223 13:26:17.901070 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9" Feb 23 13:26:17.918649 master-0 kubenswrapper[17411]: I0223 13:26:17.918605 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq"] Feb 23 13:26:17.919808 master-0 kubenswrapper[17411]: I0223 13:26:17.919784 17411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" Feb 23 13:26:17.921456 master-0 kubenswrapper[17411]: I0223 13:26:17.921384 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 23 13:26:17.931015 master-0 kubenswrapper[17411]: I0223 13:26:17.929575 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-rjqzm"] Feb 23 13:26:17.932497 master-0 kubenswrapper[17411]: I0223 13:26:17.932447 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rjqzm" Feb 23 13:26:17.945862 master-0 kubenswrapper[17411]: I0223 13:26:17.945766 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9"] Feb 23 13:26:17.958182 master-0 kubenswrapper[17411]: I0223 13:26:17.958085 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq"] Feb 23 13:26:17.983443 master-0 kubenswrapper[17411]: I0223 13:26:17.982868 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c767c86f-799e-4040-b693-80de292b69b4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-phsnq\" (UID: \"c767c86f-799e-4040-b693-80de292b69b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" Feb 23 13:26:17.983443 master-0 kubenswrapper[17411]: I0223 13:26:17.983067 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh6bw\" (UniqueName: \"kubernetes.io/projected/b1c6cc1a-e494-4590-b07b-99725d2511d6-kube-api-access-gh6bw\") pod \"nmstate-metrics-58c85c668d-8lbm9\" (UID: \"b1c6cc1a-e494-4590-b07b-99725d2511d6\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9" Feb 23 13:26:17.983443 master-0 kubenswrapper[17411]: I0223 13:26:17.983110 17411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncd9h\" (UniqueName: \"kubernetes.io/projected/c767c86f-799e-4040-b693-80de292b69b4-kube-api-access-ncd9h\") pod \"nmstate-webhook-866bcb46dc-phsnq\" (UID: \"c767c86f-799e-4040-b693-80de292b69b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" Feb 23 13:26:18.084596 master-0 kubenswrapper[17411]: I0223 13:26:18.084550 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkcn4\" (UniqueName: \"kubernetes.io/projected/d0fc88e1-f67b-4e1d-921f-b861051f6558-kube-api-access-nkcn4\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm" Feb 23 13:26:18.084596 master-0 kubenswrapper[17411]: I0223 13:26:18.084603 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh6bw\" (UniqueName: \"kubernetes.io/projected/b1c6cc1a-e494-4590-b07b-99725d2511d6-kube-api-access-gh6bw\") pod \"nmstate-metrics-58c85c668d-8lbm9\" (UID: \"b1c6cc1a-e494-4590-b07b-99725d2511d6\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9" Feb 23 13:26:18.084855 master-0 kubenswrapper[17411]: I0223 13:26:18.084630 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncd9h\" (UniqueName: \"kubernetes.io/projected/c767c86f-799e-4040-b693-80de292b69b4-kube-api-access-ncd9h\") pod \"nmstate-webhook-866bcb46dc-phsnq\" (UID: \"c767c86f-799e-4040-b693-80de292b69b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" Feb 23 13:26:18.084855 master-0 kubenswrapper[17411]: I0223 13:26:18.084762 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-dbus-socket\") pod \"nmstate-handler-rjqzm\" (UID: 
\"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm" Feb 23 13:26:18.084944 master-0 kubenswrapper[17411]: I0223 13:26:18.084874 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-ovs-socket\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm" Feb 23 13:26:18.084995 master-0 kubenswrapper[17411]: I0223 13:26:18.084973 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-nmstate-lock\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm" Feb 23 13:26:18.085049 master-0 kubenswrapper[17411]: I0223 13:26:18.085019 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c767c86f-799e-4040-b693-80de292b69b4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-phsnq\" (UID: \"c767c86f-799e-4040-b693-80de292b69b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" Feb 23 13:26:18.089202 master-0 kubenswrapper[17411]: I0223 13:26:18.089137 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c767c86f-799e-4040-b693-80de292b69b4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-phsnq\" (UID: \"c767c86f-799e-4040-b693-80de292b69b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" Feb 23 13:26:18.093877 master-0 kubenswrapper[17411]: I0223 13:26:18.093814 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"] Feb 23 13:26:18.095037 master-0 kubenswrapper[17411]: I0223 13:26:18.094999 
17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9" Feb 23 13:26:18.098035 master-0 kubenswrapper[17411]: I0223 13:26:18.097981 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 23 13:26:18.098533 master-0 kubenswrapper[17411]: I0223 13:26:18.098484 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 23 13:26:18.101156 master-0 kubenswrapper[17411]: I0223 13:26:18.101118 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"] Feb 23 13:26:18.103630 master-0 kubenswrapper[17411]: I0223 13:26:18.103596 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh6bw\" (UniqueName: \"kubernetes.io/projected/b1c6cc1a-e494-4590-b07b-99725d2511d6-kube-api-access-gh6bw\") pod \"nmstate-metrics-58c85c668d-8lbm9\" (UID: \"b1c6cc1a-e494-4590-b07b-99725d2511d6\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9" Feb 23 13:26:18.114737 master-0 kubenswrapper[17411]: I0223 13:26:18.114634 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncd9h\" (UniqueName: \"kubernetes.io/projected/c767c86f-799e-4040-b693-80de292b69b4-kube-api-access-ncd9h\") pod \"nmstate-webhook-866bcb46dc-phsnq\" (UID: \"c767c86f-799e-4040-b693-80de292b69b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" Feb 23 13:26:18.187415 master-0 kubenswrapper[17411]: I0223 13:26:18.187187 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkcn4\" (UniqueName: \"kubernetes.io/projected/d0fc88e1-f67b-4e1d-921f-b861051f6558-kube-api-access-nkcn4\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm" Feb 23 13:26:18.187415 master-0 
kubenswrapper[17411]: I0223 13:26:18.187298 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmgd\" (UniqueName: \"kubernetes.io/projected/8a3655d6-aa4a-4204-bb07-63038c6c6b75-kube-api-access-kcmgd\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.187415 master-0 kubenswrapper[17411]: I0223 13:26:18.187379 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a3655d6-aa4a-4204-bb07-63038c6c6b75-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.187711 master-0 kubenswrapper[17411]: I0223 13:26:18.187440 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-dbus-socket\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:18.187711 master-0 kubenswrapper[17411]: I0223 13:26:18.187484 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-ovs-socket\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:18.187711 master-0 kubenswrapper[17411]: I0223 13:26:18.187531 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-nmstate-lock\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:18.187711 master-0 kubenswrapper[17411]: I0223 13:26:18.187594 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3655d6-aa4a-4204-bb07-63038c6c6b75-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.187825 master-0 kubenswrapper[17411]: I0223 13:26:18.187772 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-dbus-socket\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:18.187859 master-0 kubenswrapper[17411]: I0223 13:26:18.187816 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-ovs-socket\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:18.187893 master-0 kubenswrapper[17411]: I0223 13:26:18.187820 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d0fc88e1-f67b-4e1d-921f-b861051f6558-nmstate-lock\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:18.204988 master-0 kubenswrapper[17411]: I0223 13:26:18.204935 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkcn4\" (UniqueName: \"kubernetes.io/projected/d0fc88e1-f67b-4e1d-921f-b861051f6558-kube-api-access-nkcn4\") pod \"nmstate-handler-rjqzm\" (UID: \"d0fc88e1-f67b-4e1d-921f-b861051f6558\") " pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:18.239702 master-0 kubenswrapper[17411]: I0223 13:26:18.239619 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9"
Feb 23 13:26:18.259419 master-0 kubenswrapper[17411]: I0223 13:26:18.258482 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq"
Feb 23 13:26:18.287653 master-0 kubenswrapper[17411]: I0223 13:26:18.284204 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-659c85987-hknhl"]
Feb 23 13:26:18.287653 master-0 kubenswrapper[17411]: I0223 13:26:18.285309 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.287653 master-0 kubenswrapper[17411]: I0223 13:26:18.285596 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:18.289814 master-0 kubenswrapper[17411]: I0223 13:26:18.289064 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcmgd\" (UniqueName: \"kubernetes.io/projected/8a3655d6-aa4a-4204-bb07-63038c6c6b75-kube-api-access-kcmgd\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.289814 master-0 kubenswrapper[17411]: I0223 13:26:18.289173 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a3655d6-aa4a-4204-bb07-63038c6c6b75-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.289814 master-0 kubenswrapper[17411]: I0223 13:26:18.289305 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3655d6-aa4a-4204-bb07-63038c6c6b75-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.289814 master-0 kubenswrapper[17411]: E0223 13:26:18.289461 17411 secret.go:189] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Feb 23 13:26:18.289814 master-0 kubenswrapper[17411]: E0223 13:26:18.289522 17411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a3655d6-aa4a-4204-bb07-63038c6c6b75-plugin-serving-cert podName:8a3655d6-aa4a-4204-bb07-63038c6c6b75 nodeName:}" failed. No retries permitted until 2026-02-23 13:26:18.789501294 +0000 UTC m=+1172.217007891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/8a3655d6-aa4a-4204-bb07-63038c6c6b75-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-br2j9" (UID: "8a3655d6-aa4a-4204-bb07-63038c6c6b75") : secret "plugin-serving-cert" not found
Feb 23 13:26:18.290412 master-0 kubenswrapper[17411]: I0223 13:26:18.290320 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a3655d6-aa4a-4204-bb07-63038c6c6b75-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.321597 master-0 kubenswrapper[17411]: I0223 13:26:18.321535 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-659c85987-hknhl"]
Feb 23 13:26:18.325025 master-0 kubenswrapper[17411]: I0223 13:26:18.324978 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcmgd\" (UniqueName: \"kubernetes.io/projected/8a3655d6-aa4a-4204-bb07-63038c6c6b75-kube-api-access-kcmgd\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.366018 master-0 kubenswrapper[17411]: W0223 13:26:18.365965 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0fc88e1_f67b_4e1d_921f_b861051f6558.slice/crio-1bb5484f2a1bb1c0c5834ffd1ba38f96a89c6d62053078eb59424cb588e44075 WatchSource:0}: Error finding container 1bb5484f2a1bb1c0c5834ffd1ba38f96a89c6d62053078eb59424cb588e44075: Status 404 returned error can't find the container with id 1bb5484f2a1bb1c0c5834ffd1ba38f96a89c6d62053078eb59424cb588e44075
Feb 23 13:26:18.391490 master-0 kubenswrapper[17411]: I0223 13:26:18.391077 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-oauth-config\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.391490 master-0 kubenswrapper[17411]: I0223 13:26:18.391408 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-config\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.391727 master-0 kubenswrapper[17411]: I0223 13:26:18.391532 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-serving-cert\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.391727 master-0 kubenswrapper[17411]: I0223 13:26:18.391570 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-oauth-serving-cert\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.391727 master-0 kubenswrapper[17411]: I0223 13:26:18.391613 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgr4\" (UniqueName: \"kubernetes.io/projected/d6adaba9-1008-4e85-96ae-ad92d485c98c-kube-api-access-twgr4\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.391727 master-0 kubenswrapper[17411]: I0223 13:26:18.391648 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-trusted-ca-bundle\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.391727 master-0 kubenswrapper[17411]: I0223 13:26:18.391673 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-service-ca\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.415045 master-0 kubenswrapper[17411]: I0223 13:26:18.414903 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rjqzm" event={"ID":"d0fc88e1-f67b-4e1d-921f-b861051f6558","Type":"ContainerStarted","Data":"1bb5484f2a1bb1c0c5834ffd1ba38f96a89c6d62053078eb59424cb588e44075"}
Feb 23 13:26:18.416695 master-0 kubenswrapper[17411]: I0223 13:26:18.416668 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jl7tk" event={"ID":"c98ff941-bd8e-4080-905c-d6d0a800ac06","Type":"ContainerStarted","Data":"e69ec1ea95427036398f89b67b5ca7b8c9e00f43859ae45c85b84792372405b4"}
Feb 23 13:26:18.416695 master-0 kubenswrapper[17411]: I0223 13:26:18.416696 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jl7tk" event={"ID":"c98ff941-bd8e-4080-905c-d6d0a800ac06","Type":"ContainerStarted","Data":"af9b3d6869cdbe3b2e97efb13b51101962743f82c35e971766e2a63159c2644b"}
Feb 23 13:26:18.493456 master-0 kubenswrapper[17411]: I0223 13:26:18.493398 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-trusted-ca-bundle\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.493456 master-0 kubenswrapper[17411]: I0223 13:26:18.493449 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-service-ca\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.493709 master-0 kubenswrapper[17411]: I0223 13:26:18.493491 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-oauth-config\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.493709 master-0 kubenswrapper[17411]: I0223 13:26:18.493552 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-config\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.493709 master-0 kubenswrapper[17411]: I0223 13:26:18.493590 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-serving-cert\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.493709 master-0 kubenswrapper[17411]: I0223 13:26:18.493611 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-oauth-serving-cert\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.493831 master-0 kubenswrapper[17411]: I0223 13:26:18.493712 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgr4\" (UniqueName: \"kubernetes.io/projected/d6adaba9-1008-4e85-96ae-ad92d485c98c-kube-api-access-twgr4\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.495515 master-0 kubenswrapper[17411]: I0223 13:26:18.495420 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-oauth-serving-cert\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.496789 master-0 kubenswrapper[17411]: I0223 13:26:18.496619 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-trusted-ca-bundle\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.499467 master-0 kubenswrapper[17411]: I0223 13:26:18.498440 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-config\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.500483 master-0 kubenswrapper[17411]: I0223 13:26:18.500184 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6adaba9-1008-4e85-96ae-ad92d485c98c-service-ca\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.501625 master-0 kubenswrapper[17411]: I0223 13:26:18.501583 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-serving-cert\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.507281 master-0 kubenswrapper[17411]: I0223 13:26:18.505283 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d6adaba9-1008-4e85-96ae-ad92d485c98c-console-oauth-config\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.535377 master-0 kubenswrapper[17411]: I0223 13:26:18.535321 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgr4\" (UniqueName: \"kubernetes.io/projected/d6adaba9-1008-4e85-96ae-ad92d485c98c-kube-api-access-twgr4\") pod \"console-659c85987-hknhl\" (UID: \"d6adaba9-1008-4e85-96ae-ad92d485c98c\") " pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.705419 master-0 kubenswrapper[17411]: I0223 13:26:18.704826 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:18.820073 master-0 kubenswrapper[17411]: I0223 13:26:18.819972 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3655d6-aa4a-4204-bb07-63038c6c6b75-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:18.823856 master-0 kubenswrapper[17411]: I0223 13:26:18.823824 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3655d6-aa4a-4204-bb07-63038c6c6b75-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-br2j9\" (UID: \"8a3655d6-aa4a-4204-bb07-63038c6c6b75\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:19.070321 master-0 kubenswrapper[17411]: I0223 13:26:19.070124 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"
Feb 23 13:26:19.426403 master-0 kubenswrapper[17411]: I0223 13:26:19.426230 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-wc65n" event={"ID":"93dc0dd1-e259-4214-99c8-180fa7ac5ee8","Type":"ContainerStarted","Data":"c9b715a983a2de447af7eb4c4aecf5e47fbdcfbf81e3267d20a5185d5e6012c8"}
Feb 23 13:26:19.426403 master-0 kubenswrapper[17411]: I0223 13:26:19.426385 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-wc65n"
Feb 23 13:26:19.567185 master-0 kubenswrapper[17411]: I0223 13:26:19.567128 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9"]
Feb 23 13:26:19.580938 master-0 kubenswrapper[17411]: W0223 13:26:19.580891 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1c6cc1a_e494_4590_b07b_99725d2511d6.slice/crio-026ee3b81c19e80316ece418a1789ed582726a5c10146e5e886353a49c5daec1 WatchSource:0}: Error finding container 026ee3b81c19e80316ece418a1789ed582726a5c10146e5e886353a49c5daec1: Status 404 returned error can't find the container with id 026ee3b81c19e80316ece418a1789ed582726a5c10146e5e886353a49c5daec1
Feb 23 13:26:19.582196 master-0 kubenswrapper[17411]: I0223 13:26:19.582128 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq"]
Feb 23 13:26:19.881030 master-0 kubenswrapper[17411]: I0223 13:26:19.880949 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-wc65n" podStartSLOduration=3.589944187 podStartE2EDuration="4.880927125s" podCreationTimestamp="2026-02-23 13:26:15 +0000 UTC" firstStartedPulling="2026-02-23 13:26:16.83738667 +0000 UTC m=+1170.264893267" lastFinishedPulling="2026-02-23 13:26:18.128369608 +0000 UTC m=+1171.555876205" observedRunningTime="2026-02-23 13:26:19.859765586 +0000 UTC m=+1173.287272203" watchObservedRunningTime="2026-02-23 13:26:19.880927125 +0000 UTC m=+1173.308433722"
Feb 23 13:26:19.891847 master-0 kubenswrapper[17411]: W0223 13:26:19.888131 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6adaba9_1008_4e85_96ae_ad92d485c98c.slice/crio-6d1fab0f3a525c7d9b7017108f1f852b1fd62bc2cf480ddd85ece24456de8016 WatchSource:0}: Error finding container 6d1fab0f3a525c7d9b7017108f1f852b1fd62bc2cf480ddd85ece24456de8016: Status 404 returned error can't find the container with id 6d1fab0f3a525c7d9b7017108f1f852b1fd62bc2cf480ddd85ece24456de8016
Feb 23 13:26:19.893411 master-0 kubenswrapper[17411]: I0223 13:26:19.893355 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-659c85987-hknhl"]
Feb 23 13:26:20.118696 master-0 kubenswrapper[17411]: I0223 13:26:20.113818 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9"]
Feb 23 13:26:20.178343 master-0 kubenswrapper[17411]: W0223 13:26:20.176613 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a3655d6_aa4a_4204_bb07_63038c6c6b75.slice/crio-b1fadd44149ef3a675af04f6b6203a7e2c28c39e845b02e9bd048fda889cf168 WatchSource:0}: Error finding container b1fadd44149ef3a675af04f6b6203a7e2c28c39e845b02e9bd048fda889cf168: Status 404 returned error can't find the container with id b1fadd44149ef3a675af04f6b6203a7e2c28c39e845b02e9bd048fda889cf168
Feb 23 13:26:20.435545 master-0 kubenswrapper[17411]: I0223 13:26:20.435399 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" event={"ID":"c767c86f-799e-4040-b693-80de292b69b4","Type":"ContainerStarted","Data":"85e8beb9ab487c3de3fa73e4d6e10e6c9ea0b7fbfe07ff94cb4a9c2ab8cde5ea"}
Feb 23 13:26:20.436723 master-0 kubenswrapper[17411]: I0223 13:26:20.436690 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9" event={"ID":"8a3655d6-aa4a-4204-bb07-63038c6c6b75","Type":"ContainerStarted","Data":"b1fadd44149ef3a675af04f6b6203a7e2c28c39e845b02e9bd048fda889cf168"}
Feb 23 13:26:20.438314 master-0 kubenswrapper[17411]: I0223 13:26:20.438291 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9" event={"ID":"b1c6cc1a-e494-4590-b07b-99725d2511d6","Type":"ContainerStarted","Data":"026ee3b81c19e80316ece418a1789ed582726a5c10146e5e886353a49c5daec1"}
Feb 23 13:26:20.441391 master-0 kubenswrapper[17411]: I0223 13:26:20.441333 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-659c85987-hknhl" event={"ID":"d6adaba9-1008-4e85-96ae-ad92d485c98c","Type":"ContainerStarted","Data":"84825390fa408cb4a3920ac67869ab575f10bb5ee057d3dcbcf76ecccbdf7359"}
Feb 23 13:26:20.441460 master-0 kubenswrapper[17411]: I0223 13:26:20.441407 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-659c85987-hknhl" event={"ID":"d6adaba9-1008-4e85-96ae-ad92d485c98c","Type":"ContainerStarted","Data":"6d1fab0f3a525c7d9b7017108f1f852b1fd62bc2cf480ddd85ece24456de8016"}
Feb 23 13:26:20.467416 master-0 kubenswrapper[17411]: I0223 13:26:20.467336 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-659c85987-hknhl" podStartSLOduration=2.467313213 podStartE2EDuration="2.467313213s" podCreationTimestamp="2026-02-23 13:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:26:20.462707481 +0000 UTC m=+1173.890214098" watchObservedRunningTime="2026-02-23 13:26:20.467313213 +0000 UTC m=+1173.894819810"
Feb 23 13:26:21.449843 master-0 kubenswrapper[17411]: I0223 13:26:21.449754 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jl7tk" event={"ID":"c98ff941-bd8e-4080-905c-d6d0a800ac06","Type":"ContainerStarted","Data":"08821bc4343c75cecfe928cc5c4449b7910964b03b01aa35ebfa4b0d9403ee2d"}
Feb 23 13:26:21.482567 master-0 kubenswrapper[17411]: I0223 13:26:21.482445 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jl7tk" podStartSLOduration=4.257569342 podStartE2EDuration="6.482412905s" podCreationTimestamp="2026-02-23 13:26:15 +0000 UTC" firstStartedPulling="2026-02-23 13:26:17.96332074 +0000 UTC m=+1171.390827337" lastFinishedPulling="2026-02-23 13:26:20.188164303 +0000 UTC m=+1173.615670900" observedRunningTime="2026-02-23 13:26:21.476058062 +0000 UTC m=+1174.903564669" watchObservedRunningTime="2026-02-23 13:26:21.482412905 +0000 UTC m=+1174.909919502"
Feb 23 13:26:22.457318 master-0 kubenswrapper[17411]: I0223 13:26:22.457221 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jl7tk"
Feb 23 13:26:23.468125 master-0 kubenswrapper[17411]: I0223 13:26:23.468047 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" event={"ID":"c767c86f-799e-4040-b693-80de292b69b4","Type":"ContainerStarted","Data":"f38cf4598190d94a8f414cf9c6ce5db5cd4c896e1eefe9c7a1198d44e1306006"}
Feb 23 13:26:23.468838 master-0 kubenswrapper[17411]: I0223 13:26:23.468195 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq"
Feb 23 13:26:23.470590 master-0 kubenswrapper[17411]: I0223 13:26:23.470425 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9" event={"ID":"b1c6cc1a-e494-4590-b07b-99725d2511d6","Type":"ContainerStarted","Data":"4d7478248b3c7b1130d09de2542a30c1aa2a1b39fa11cd4592375d8906eb30aa"}
Feb 23 13:26:23.471925 master-0 kubenswrapper[17411]: I0223 13:26:23.471822 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" event={"ID":"844d9bcd-edea-48a2-b38c-38669b47ed0b","Type":"ContainerStarted","Data":"227bd796653a0e8aa6b0d2c8b1b9698a3a331770d374ecfb537dbad6db89c434"}
Feb 23 13:26:23.472147 master-0 kubenswrapper[17411]: I0223 13:26:23.472048 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2"
Feb 23 13:26:23.473148 master-0 kubenswrapper[17411]: I0223 13:26:23.473123 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rjqzm" event={"ID":"d0fc88e1-f67b-4e1d-921f-b861051f6558","Type":"ContainerStarted","Data":"befd7ac2cf638426aeec3693a56de0ba921fa82273ca73cc29cbf7de58e3bda0"}
Feb 23 13:26:23.473329 master-0 kubenswrapper[17411]: I0223 13:26:23.473291 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:23.474922 master-0 kubenswrapper[17411]: I0223 13:26:23.474880 17411 generic.go:334] "Generic (PLEG): container finished" podID="f31917ab-7d72-4bb3-8378-406df839219d" containerID="024d6f38dbc6c0aa967a0e539c642519decb6efc70900d07953dfd471bbcb921" exitCode=0
Feb 23 13:26:23.475717 master-0 kubenswrapper[17411]: I0223 13:26:23.475671 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerDied","Data":"024d6f38dbc6c0aa967a0e539c642519decb6efc70900d07953dfd471bbcb921"}
Feb 23 13:26:23.502437 master-0 kubenswrapper[17411]: I0223 13:26:23.502331 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq" podStartSLOduration=2.9474697499999998 podStartE2EDuration="6.502313813s" podCreationTimestamp="2026-02-23 13:26:17 +0000 UTC" firstStartedPulling="2026-02-23 13:26:19.572506143 +0000 UTC m=+1173.000012740" lastFinishedPulling="2026-02-23 13:26:23.127350206 +0000 UTC m=+1176.554856803" observedRunningTime="2026-02-23 13:26:23.500620824 +0000 UTC m=+1176.928127421" watchObservedRunningTime="2026-02-23 13:26:23.502313813 +0000 UTC m=+1176.929820410"
Feb 23 13:26:23.536725 master-0 kubenswrapper[17411]: I0223 13:26:23.532532 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2" podStartSLOduration=1.8963484369999999 podStartE2EDuration="8.532510452s" podCreationTimestamp="2026-02-23 13:26:15 +0000 UTC" firstStartedPulling="2026-02-23 13:26:16.483375586 +0000 UTC m=+1169.910882183" lastFinishedPulling="2026-02-23 13:26:23.119537591 +0000 UTC m=+1176.547044198" observedRunningTime="2026-02-23 13:26:23.523123261 +0000 UTC m=+1176.950629858" watchObservedRunningTime="2026-02-23 13:26:23.532510452 +0000 UTC m=+1176.960017049"
Feb 23 13:26:23.545748 master-0 kubenswrapper[17411]: I0223 13:26:23.545664 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-rjqzm" podStartSLOduration=1.796137708 podStartE2EDuration="6.545643649s" podCreationTimestamp="2026-02-23 13:26:17 +0000 UTC" firstStartedPulling="2026-02-23 13:26:18.369589708 +0000 UTC m=+1171.797096305" lastFinishedPulling="2026-02-23 13:26:23.119095649 +0000 UTC m=+1176.546602246" observedRunningTime="2026-02-23 13:26:23.541621934 +0000 UTC m=+1176.969128561" watchObservedRunningTime="2026-02-23 13:26:23.545643649 +0000 UTC m=+1176.973150246"
Feb 23 13:26:24.490902 master-0 kubenswrapper[17411]: I0223 13:26:24.490851 17411 generic.go:334] "Generic (PLEG): container finished" podID="f31917ab-7d72-4bb3-8378-406df839219d" containerID="17fb218bb47c97e8303549c48ed0f3f7c9dcbc346b66dd15930ea6bb533863f1" exitCode=0
Feb 23 13:26:24.491554 master-0 kubenswrapper[17411]: I0223 13:26:24.491010 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerDied","Data":"17fb218bb47c97e8303549c48ed0f3f7c9dcbc346b66dd15930ea6bb533863f1"}
Feb 23 13:26:24.493512 master-0 kubenswrapper[17411]: I0223 13:26:24.493321 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9" event={"ID":"b1c6cc1a-e494-4590-b07b-99725d2511d6","Type":"ContainerStarted","Data":"8a709d12bf7d4fbfc3e79f14d8414eae85ad2ddc7f8b61656ed6dee565dadb1f"}
Feb 23 13:26:24.540823 master-0 kubenswrapper[17411]: I0223 13:26:24.540347 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8lbm9" podStartSLOduration=4.000950755 podStartE2EDuration="7.540327013s" podCreationTimestamp="2026-02-23 13:26:17 +0000 UTC" firstStartedPulling="2026-02-23 13:26:19.583839149 +0000 UTC m=+1173.011345746" lastFinishedPulling="2026-02-23 13:26:23.123215407 +0000 UTC m=+1176.550722004" observedRunningTime="2026-02-23 13:26:24.537320846 +0000 UTC m=+1177.964827463" watchObservedRunningTime="2026-02-23 13:26:24.540327013 +0000 UTC m=+1177.967833610"
Feb 23 13:26:25.504508 master-0 kubenswrapper[17411]: I0223 13:26:25.504427 17411 generic.go:334] "Generic (PLEG): container finished" podID="f31917ab-7d72-4bb3-8378-406df839219d" containerID="4b944f5c747ee347622c0118fcff1aa0c74fb63ab43901219b9882c8ad7cf842" exitCode=0
Feb 23 13:26:25.505126 master-0 kubenswrapper[17411]: I0223 13:26:25.504576 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerDied","Data":"4b944f5c747ee347622c0118fcff1aa0c74fb63ab43901219b9882c8ad7cf842"}
Feb 23 13:26:26.189326 master-0 kubenswrapper[17411]: I0223 13:26:26.189271 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-wc65n"
Feb 23 13:26:26.519026 master-0 kubenswrapper[17411]: I0223 13:26:26.518966 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerStarted","Data":"c39a051796688f7a6684228db6e3d705b3dd0686b123be930c049b64a410693a"}
Feb 23 13:26:26.519026 master-0 kubenswrapper[17411]: I0223 13:26:26.519016 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerStarted","Data":"500231f0b4f5f264d2d48341ac085057471bfd060c8738b8e7f0e517aec3ff7a"}
Feb 23 13:26:26.519026 master-0 kubenswrapper[17411]: I0223 13:26:26.519031 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerStarted","Data":"08b926304f9f62db3aa4a34e0954db400de5dd95e7929d6b208917f8b90f8d77"}
Feb 23 13:26:26.527760 master-0 kubenswrapper[17411]: I0223 13:26:26.520722 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9" event={"ID":"8a3655d6-aa4a-4204-bb07-63038c6c6b75","Type":"ContainerStarted","Data":"1b74f3fac0ea8699b85670c31249f7b2696561543082afac9c9eb4548a9dc0b9"}
Feb 23 13:26:26.549894 master-0 kubenswrapper[17411]: I0223 13:26:26.549829 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-br2j9" podStartSLOduration=2.442230211 podStartE2EDuration="8.54978949s" podCreationTimestamp="2026-02-23 13:26:18 +0000 UTC" firstStartedPulling="2026-02-23 13:26:20.192721404 +0000 UTC m=+1173.620228001" lastFinishedPulling="2026-02-23 13:26:26.300280663 +0000 UTC m=+1179.727787280" observedRunningTime="2026-02-23 13:26:26.539383831 +0000 UTC m=+1179.966890438" watchObservedRunningTime="2026-02-23 13:26:26.54978949 +0000 UTC m=+1179.977296087"
Feb 23 13:26:27.534508 master-0 kubenswrapper[17411]: I0223 13:26:27.534427 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerStarted","Data":"4390ccf1beeb0b0b43b398c54cb07c7ca80a607bc188e4f6332e43013f668276"}
Feb 23 13:26:27.534508 master-0 kubenswrapper[17411]: I0223 13:26:27.534499 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerStarted","Data":"d6411f5da6663f3028e9b026e7c4de9a11bdc773086c014aca969f6bd039b52f"}
Feb 23 13:26:27.534508 master-0 kubenswrapper[17411]: I0223 13:26:27.534516 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9v4qh" event={"ID":"f31917ab-7d72-4bb3-8378-406df839219d","Type":"ContainerStarted","Data":"47fdb7dab11209ce3c3242629ea1485b1a6d8a63bf6f0a5d2e296785e9d29868"}
Feb 23 13:26:27.580909 master-0 kubenswrapper[17411]: I0223 13:26:27.580748 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9v4qh" podStartSLOduration=5.562261895 podStartE2EDuration="12.580714537s" podCreationTimestamp="2026-02-23 13:26:15 +0000 UTC" firstStartedPulling="2026-02-23 13:26:16.103757386 +0000 UTC m=+1169.531263983" lastFinishedPulling="2026-02-23 13:26:23.122210028 +0000 UTC m=+1176.549716625" observedRunningTime="2026-02-23 13:26:27.573456298 +0000 UTC m=+1181.000962895" watchObservedRunningTime="2026-02-23 13:26:27.580714537 +0000 UTC m=+1181.008221174"
Feb 23 13:26:27.592155 master-0 kubenswrapper[17411]: I0223 13:26:27.592089 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jl7tk"
Feb 23 13:26:28.334143 master-0 kubenswrapper[17411]: I0223 13:26:28.334010 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-rjqzm"
Feb 23 13:26:28.544533 master-0 kubenswrapper[17411]: I0223 13:26:28.544471 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9v4qh"
Feb 23 13:26:28.706021 master-0 kubenswrapper[17411]: I0223 13:26:28.705923 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:28.706021 master-0 kubenswrapper[17411]: I0223 13:26:28.705994 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:28.714235 master-0 kubenswrapper[17411]: I0223 13:26:28.714167 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:29.555976 master-0 kubenswrapper[17411]: I0223 13:26:29.555781 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-659c85987-hknhl"
Feb 23 13:26:29.681026 master-0 kubenswrapper[17411]: I0223 13:26:29.680942 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cdf5bf6fc-ws9gr"]
Feb 23 13:26:30.963578 master-0 kubenswrapper[17411]: I0223 13:26:30.963477 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9v4qh"
Feb 23 13:26:31.016667 master-0 kubenswrapper[17411]: I0223 13:26:31.016586 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9v4qh"
Feb 23 13:26:35.968960 master-0 kubenswrapper[17411]: I0223 13:26:35.968837 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9v4qh"
Feb 23 13:26:35.993916 master-0 kubenswrapper[17411]: I0223 13:26:35.993810 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jglz2"
Feb 23 13:26:38.267846 master-0 kubenswrapper[17411]: I0223 13:26:38.267771 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-phsnq"
Feb 23 13:26:42.969693 master-0 kubenswrapper[17411]: I0223 13:26:42.969626 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-skhlg"]
Feb 23 13:26:42.971462 master-0 kubenswrapper[17411]: I0223 13:26:42.971421 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-skhlg"
Feb 23 13:26:42.974357 master-0 kubenswrapper[17411]: I0223 13:26:42.974300 17411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Feb 23 13:26:42.979474 master-0 kubenswrapper[17411]: I0223 13:26:42.979413 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-skhlg"]
Feb 23 13:26:43.017860 master-0 kubenswrapper[17411]: I0223 13:26:43.017768 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-device-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg"
Feb 23 13:26:43.018266 master-0 kubenswrapper[17411]: I0223 13:26:43.017925 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-lvmd-config\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg"
Feb 23 13:26:43.018266 master-0 kubenswrapper[17411]: I0223 13:26:43.018066 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\"
(UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-sys\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.018266 master-0 kubenswrapper[17411]: I0223 13:26:43.018190 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-node-plugin-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.018477 master-0 kubenswrapper[17411]: I0223 13:26:43.018320 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-run-udev\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.018584 master-0 kubenswrapper[17411]: I0223 13:26:43.018556 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-file-lock-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.018676 master-0 kubenswrapper[17411]: I0223 13:26:43.018646 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkvxm\" (UniqueName: \"kubernetes.io/projected/718394ea-bd0b-441c-90d2-f20b4a8a92d5-kube-api-access-hkvxm\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.018808 master-0 kubenswrapper[17411]: I0223 13:26:43.018765 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/718394ea-bd0b-441c-90d2-f20b4a8a92d5-metrics-cert\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.018896 master-0 kubenswrapper[17411]: I0223 13:26:43.018808 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-csi-plugin-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.018896 master-0 kubenswrapper[17411]: I0223 13:26:43.018881 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-pod-volumes-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.019023 master-0 kubenswrapper[17411]: I0223 13:26:43.018975 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-registration-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121156 master-0 kubenswrapper[17411]: I0223 13:26:43.121082 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-registration-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121390 master-0 kubenswrapper[17411]: I0223 13:26:43.121218 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-device-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121390 master-0 kubenswrapper[17411]: I0223 13:26:43.121280 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-lvmd-config\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121390 master-0 kubenswrapper[17411]: I0223 13:26:43.121317 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-sys\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121390 master-0 kubenswrapper[17411]: I0223 13:26:43.121353 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-node-plugin-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121390 master-0 kubenswrapper[17411]: I0223 13:26:43.121378 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-run-udev\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121553 master-0 kubenswrapper[17411]: I0223 13:26:43.121436 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: 
\"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-file-lock-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121553 master-0 kubenswrapper[17411]: I0223 13:26:43.121458 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkvxm\" (UniqueName: \"kubernetes.io/projected/718394ea-bd0b-441c-90d2-f20b4a8a92d5-kube-api-access-hkvxm\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121553 master-0 kubenswrapper[17411]: I0223 13:26:43.121497 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/718394ea-bd0b-441c-90d2-f20b4a8a92d5-metrics-cert\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121553 master-0 kubenswrapper[17411]: I0223 13:26:43.121513 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-csi-plugin-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121553 master-0 kubenswrapper[17411]: I0223 13:26:43.121538 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-pod-volumes-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121707 master-0 kubenswrapper[17411]: I0223 13:26:43.121697 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: 
\"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-pod-volumes-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121810 master-0 kubenswrapper[17411]: I0223 13:26:43.121776 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-registration-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.121845 master-0 kubenswrapper[17411]: I0223 13:26:43.121811 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-device-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.122043 master-0 kubenswrapper[17411]: I0223 13:26:43.121986 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-lvmd-config\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.122268 master-0 kubenswrapper[17411]: I0223 13:26:43.122220 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-file-lock-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.122368 master-0 kubenswrapper[17411]: I0223 13:26:43.122330 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-sys\") pod \"vg-manager-skhlg\" (UID: 
\"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.122368 master-0 kubenswrapper[17411]: I0223 13:26:43.122300 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-node-plugin-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.122684 master-0 kubenswrapper[17411]: I0223 13:26:43.122646 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-csi-plugin-dir\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.122738 master-0 kubenswrapper[17411]: I0223 13:26:43.122700 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/718394ea-bd0b-441c-90d2-f20b4a8a92d5-run-udev\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.125648 master-0 kubenswrapper[17411]: I0223 13:26:43.125628 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/718394ea-bd0b-441c-90d2-f20b4a8a92d5-metrics-cert\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.138337 master-0 kubenswrapper[17411]: I0223 13:26:43.138293 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkvxm\" (UniqueName: \"kubernetes.io/projected/718394ea-bd0b-441c-90d2-f20b4a8a92d5-kube-api-access-hkvxm\") pod \"vg-manager-skhlg\" (UID: \"718394ea-bd0b-441c-90d2-f20b4a8a92d5\") " pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.288761 
master-0 kubenswrapper[17411]: I0223 13:26:43.288606 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:43.780846 master-0 kubenswrapper[17411]: I0223 13:26:43.780791 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-skhlg"] Feb 23 13:26:43.783726 master-0 kubenswrapper[17411]: W0223 13:26:43.783673 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718394ea_bd0b_441c_90d2_f20b4a8a92d5.slice/crio-6a70775164d0e3eb2c642cb6aceb65723a3b8d0ded4a357c5329e2da5aafe22d WatchSource:0}: Error finding container 6a70775164d0e3eb2c642cb6aceb65723a3b8d0ded4a357c5329e2da5aafe22d: Status 404 returned error can't find the container with id 6a70775164d0e3eb2c642cb6aceb65723a3b8d0ded4a357c5329e2da5aafe22d Feb 23 13:26:44.722437 master-0 kubenswrapper[17411]: I0223 13:26:44.722312 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-skhlg" event={"ID":"718394ea-bd0b-441c-90d2-f20b4a8a92d5","Type":"ContainerStarted","Data":"898ed6c5bddc92055be3f7d39f95493792a924c89783089a5538d9761be8efa4"} Feb 23 13:26:44.722437 master-0 kubenswrapper[17411]: I0223 13:26:44.722410 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-skhlg" event={"ID":"718394ea-bd0b-441c-90d2-f20b4a8a92d5","Type":"ContainerStarted","Data":"6a70775164d0e3eb2c642cb6aceb65723a3b8d0ded4a357c5329e2da5aafe22d"} Feb 23 13:26:44.749447 master-0 kubenswrapper[17411]: I0223 13:26:44.749343 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-skhlg" podStartSLOduration=2.749321895 podStartE2EDuration="2.749321895s" podCreationTimestamp="2026-02-23 13:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 
13:26:44.746192956 +0000 UTC m=+1198.173699563" watchObservedRunningTime="2026-02-23 13:26:44.749321895 +0000 UTC m=+1198.176828492" Feb 23 13:26:46.748346 master-0 kubenswrapper[17411]: I0223 13:26:46.748211 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-skhlg_718394ea-bd0b-441c-90d2-f20b4a8a92d5/vg-manager/0.log" Feb 23 13:26:46.748346 master-0 kubenswrapper[17411]: I0223 13:26:46.748333 17411 generic.go:334] "Generic (PLEG): container finished" podID="718394ea-bd0b-441c-90d2-f20b4a8a92d5" containerID="898ed6c5bddc92055be3f7d39f95493792a924c89783089a5538d9761be8efa4" exitCode=1 Feb 23 13:26:46.749363 master-0 kubenswrapper[17411]: I0223 13:26:46.748383 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-skhlg" event={"ID":"718394ea-bd0b-441c-90d2-f20b4a8a92d5","Type":"ContainerDied","Data":"898ed6c5bddc92055be3f7d39f95493792a924c89783089a5538d9761be8efa4"} Feb 23 13:26:46.749363 master-0 kubenswrapper[17411]: I0223 13:26:46.749314 17411 scope.go:117] "RemoveContainer" containerID="898ed6c5bddc92055be3f7d39f95493792a924c89783089a5538d9761be8efa4" Feb 23 13:26:47.119676 master-0 kubenswrapper[17411]: I0223 13:26:47.119114 17411 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Feb 23 13:26:47.768863 master-0 kubenswrapper[17411]: I0223 13:26:47.768785 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-skhlg_718394ea-bd0b-441c-90d2-f20b4a8a92d5/vg-manager/0.log" Feb 23 13:26:47.769708 master-0 kubenswrapper[17411]: I0223 13:26:47.768876 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-skhlg" event={"ID":"718394ea-bd0b-441c-90d2-f20b4a8a92d5","Type":"ContainerStarted","Data":"74b216d692b80a142bae1ab28920141868ee1759e8821421f1460ef0a84725d6"} Feb 23 13:26:47.889323 master-0 kubenswrapper[17411]: I0223 
13:26:47.889112 17411 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-23T13:26:47.119166816Z","Handler":null,"Name":""} Feb 23 13:26:47.891564 master-0 kubenswrapper[17411]: I0223 13:26:47.891539 17411 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Feb 23 13:26:47.891648 master-0 kubenswrapper[17411]: I0223 13:26:47.891579 17411 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Feb 23 13:26:53.289065 master-0 kubenswrapper[17411]: I0223 13:26:53.288970 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:53.292619 master-0 kubenswrapper[17411]: I0223 13:26:53.292519 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:53.838737 master-0 kubenswrapper[17411]: I0223 13:26:53.838645 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:53.840237 master-0 kubenswrapper[17411]: I0223 13:26:53.840182 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-skhlg" Feb 23 13:26:54.747229 master-0 kubenswrapper[17411]: I0223 13:26:54.747141 17411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7cdf5bf6fc-ws9gr" podUID="1c0c0578-9329-492f-9453-9503d4007aa3" containerName="console" containerID="cri-o://70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18" gracePeriod=15 Feb 23 13:26:55.283324 master-0 kubenswrapper[17411]: I0223 13:26:55.283160 17411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-7cdf5bf6fc-ws9gr_1c0c0578-9329-492f-9453-9503d4007aa3/console/0.log" Feb 23 13:26:55.283324 master-0 kubenswrapper[17411]: I0223 13:26:55.283269 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cdf5bf6fc-ws9gr" Feb 23 13:26:55.285924 master-0 kubenswrapper[17411]: I0223 13:26:55.285725 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-oauth-config\") pod \"1c0c0578-9329-492f-9453-9503d4007aa3\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " Feb 23 13:26:55.285924 master-0 kubenswrapper[17411]: I0223 13:26:55.285757 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-serving-cert\") pod \"1c0c0578-9329-492f-9453-9503d4007aa3\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " Feb 23 13:26:55.285924 master-0 kubenswrapper[17411]: I0223 13:26:55.285808 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-oauth-serving-cert\") pod \"1c0c0578-9329-492f-9453-9503d4007aa3\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " Feb 23 13:26:55.286098 master-0 kubenswrapper[17411]: I0223 13:26:55.285959 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbcxb\" (UniqueName: \"kubernetes.io/projected/1c0c0578-9329-492f-9453-9503d4007aa3-kube-api-access-nbcxb\") pod \"1c0c0578-9329-492f-9453-9503d4007aa3\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " Feb 23 13:26:55.286098 master-0 kubenswrapper[17411]: I0223 13:26:55.285981 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-console-config\") pod \"1c0c0578-9329-492f-9453-9503d4007aa3\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " Feb 23 13:26:55.286098 master-0 kubenswrapper[17411]: I0223 13:26:55.286039 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-service-ca\") pod \"1c0c0578-9329-492f-9453-9503d4007aa3\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " Feb 23 13:26:55.286098 master-0 kubenswrapper[17411]: I0223 13:26:55.286073 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-trusted-ca-bundle\") pod \"1c0c0578-9329-492f-9453-9503d4007aa3\" (UID: \"1c0c0578-9329-492f-9453-9503d4007aa3\") " Feb 23 13:26:55.286690 master-0 kubenswrapper[17411]: I0223 13:26:55.286643 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-console-config" (OuterVolumeSpecName: "console-config") pod "1c0c0578-9329-492f-9453-9503d4007aa3" (UID: "1c0c0578-9329-492f-9453-9503d4007aa3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:55.286804 master-0 kubenswrapper[17411]: I0223 13:26:55.286771 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1c0c0578-9329-492f-9453-9503d4007aa3" (UID: "1c0c0578-9329-492f-9453-9503d4007aa3"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:55.286929 master-0 kubenswrapper[17411]: I0223 13:26:55.286854 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1c0c0578-9329-492f-9453-9503d4007aa3" (UID: "1c0c0578-9329-492f-9453-9503d4007aa3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:55.287241 master-0 kubenswrapper[17411]: I0223 13:26:55.287193 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-service-ca" (OuterVolumeSpecName: "service-ca") pod "1c0c0578-9329-492f-9453-9503d4007aa3" (UID: "1c0c0578-9329-492f-9453-9503d4007aa3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:55.289435 master-0 kubenswrapper[17411]: I0223 13:26:55.289390 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1c0c0578-9329-492f-9453-9503d4007aa3" (UID: "1c0c0578-9329-492f-9453-9503d4007aa3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:55.289652 master-0 kubenswrapper[17411]: I0223 13:26:55.289618 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0c0578-9329-492f-9453-9503d4007aa3-kube-api-access-nbcxb" (OuterVolumeSpecName: "kube-api-access-nbcxb") pod "1c0c0578-9329-492f-9453-9503d4007aa3" (UID: "1c0c0578-9329-492f-9453-9503d4007aa3"). InnerVolumeSpecName "kube-api-access-nbcxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:55.290053 master-0 kubenswrapper[17411]: I0223 13:26:55.290013 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1c0c0578-9329-492f-9453-9503d4007aa3" (UID: "1c0c0578-9329-492f-9453-9503d4007aa3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:55.388220 master-0 kubenswrapper[17411]: I0223 13:26:55.388141 17411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 23 13:26:55.388220 master-0 kubenswrapper[17411]: I0223 13:26:55.388196 17411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0c0578-9329-492f-9453-9503d4007aa3-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:26:55.388220 master-0 kubenswrapper[17411]: I0223 13:26:55.388207 17411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 23 13:26:55.388220 master-0 kubenswrapper[17411]: I0223 13:26:55.388216 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbcxb\" (UniqueName: \"kubernetes.io/projected/1c0c0578-9329-492f-9453-9503d4007aa3-kube-api-access-nbcxb\") on node \"master-0\" DevicePath \"\"" Feb 23 13:26:55.388220 master-0 kubenswrapper[17411]: I0223 13:26:55.388226 17411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-console-config\") on node \"master-0\" DevicePath \"\"" 
Feb 23 13:26:55.388651 master-0 kubenswrapper[17411]: I0223 13:26:55.388234 17411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 23 13:26:55.388651 master-0 kubenswrapper[17411]: I0223 13:26:55.388294 17411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c0c0578-9329-492f-9453-9503d4007aa3-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 23 13:26:55.862740 master-0 kubenswrapper[17411]: I0223 13:26:55.862672 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cdf5bf6fc-ws9gr_1c0c0578-9329-492f-9453-9503d4007aa3/console/0.log"
Feb 23 13:26:55.862740 master-0 kubenswrapper[17411]: I0223 13:26:55.862733 17411 generic.go:334] "Generic (PLEG): container finished" podID="1c0c0578-9329-492f-9453-9503d4007aa3" containerID="70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18" exitCode=2
Feb 23 13:26:55.863449 master-0 kubenswrapper[17411]: I0223 13:26:55.862805 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cdf5bf6fc-ws9gr" event={"ID":"1c0c0578-9329-492f-9453-9503d4007aa3","Type":"ContainerDied","Data":"70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18"}
Feb 23 13:26:55.863449 master-0 kubenswrapper[17411]: I0223 13:26:55.862838 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cdf5bf6fc-ws9gr"
Feb 23 13:26:55.863449 master-0 kubenswrapper[17411]: I0223 13:26:55.862865 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cdf5bf6fc-ws9gr" event={"ID":"1c0c0578-9329-492f-9453-9503d4007aa3","Type":"ContainerDied","Data":"7c1bc202949f7cae9f66b14a83c9bff346d77ad8f376cd40e1db2449cd741fc1"}
Feb 23 13:26:55.863449 master-0 kubenswrapper[17411]: I0223 13:26:55.862893 17411 scope.go:117] "RemoveContainer" containerID="70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18"
Feb 23 13:26:55.884523 master-0 kubenswrapper[17411]: I0223 13:26:55.884478 17411 scope.go:117] "RemoveContainer" containerID="70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18"
Feb 23 13:26:55.884960 master-0 kubenswrapper[17411]: E0223 13:26:55.884922 17411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18\": container with ID starting with 70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18 not found: ID does not exist" containerID="70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18"
Feb 23 13:26:55.885032 master-0 kubenswrapper[17411]: I0223 13:26:55.884957 17411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18"} err="failed to get container status \"70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18\": rpc error: code = NotFound desc = could not find container \"70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18\": container with ID starting with 70ca4e064da077550372959a858e94ce6509e7b6748c60fdf0490e90894e7d18 not found: ID does not exist"
Feb 23 13:26:55.915630 master-0 kubenswrapper[17411]: I0223 13:26:55.915563 17411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cdf5bf6fc-ws9gr"]
Feb 23 13:26:55.927790 master-0 kubenswrapper[17411]: I0223 13:26:55.927717 17411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7cdf5bf6fc-ws9gr"]
Feb 23 13:26:56.052214 master-0 kubenswrapper[17411]: I0223 13:26:56.052155 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-w928r"]
Feb 23 13:26:56.052610 master-0 kubenswrapper[17411]: E0223 13:26:56.052588 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0c0578-9329-492f-9453-9503d4007aa3" containerName="console"
Feb 23 13:26:56.052665 master-0 kubenswrapper[17411]: I0223 13:26:56.052610 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0c0578-9329-492f-9453-9503d4007aa3" containerName="console"
Feb 23 13:26:56.052868 master-0 kubenswrapper[17411]: I0223 13:26:56.052841 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0c0578-9329-492f-9453-9503d4007aa3" containerName="console"
Feb 23 13:26:56.053458 master-0 kubenswrapper[17411]: I0223 13:26:56.053435 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:26:56.056334 master-0 kubenswrapper[17411]: I0223 13:26:56.056270 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 23 13:26:56.056700 master-0 kubenswrapper[17411]: I0223 13:26:56.056659 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 23 13:26:56.068224 master-0 kubenswrapper[17411]: I0223 13:26:56.068168 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w928r"]
Feb 23 13:26:56.113703 master-0 kubenswrapper[17411]: I0223 13:26:56.113586 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmfz4\" (UniqueName: \"kubernetes.io/projected/9610c5fc-e8ea-44d9-b931-049c2ac5828b-kube-api-access-tmfz4\") pod \"openstack-operator-index-w928r\" (UID: \"9610c5fc-e8ea-44d9-b931-049c2ac5828b\") " pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:26:56.218271 master-0 kubenswrapper[17411]: I0223 13:26:56.217862 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmfz4\" (UniqueName: \"kubernetes.io/projected/9610c5fc-e8ea-44d9-b931-049c2ac5828b-kube-api-access-tmfz4\") pod \"openstack-operator-index-w928r\" (UID: \"9610c5fc-e8ea-44d9-b931-049c2ac5828b\") " pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:26:56.254270 master-0 kubenswrapper[17411]: I0223 13:26:56.252765 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmfz4\" (UniqueName: \"kubernetes.io/projected/9610c5fc-e8ea-44d9-b931-049c2ac5828b-kube-api-access-tmfz4\") pod \"openstack-operator-index-w928r\" (UID: \"9610c5fc-e8ea-44d9-b931-049c2ac5828b\") " pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:26:56.373081 master-0 kubenswrapper[17411]: I0223 13:26:56.372916 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:26:56.892653 master-0 kubenswrapper[17411]: I0223 13:26:56.892535 17411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0c0578-9329-492f-9453-9503d4007aa3" path="/var/lib/kubelet/pods/1c0c0578-9329-492f-9453-9503d4007aa3/volumes"
Feb 23 13:26:57.631789 master-0 kubenswrapper[17411]: I0223 13:26:57.631689 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w928r"]
Feb 23 13:26:57.646398 master-0 kubenswrapper[17411]: W0223 13:26:57.646317 17411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9610c5fc_e8ea_44d9_b931_049c2ac5828b.slice/crio-bb8fce69d8a341346c519f8b4dc84bccb7029cd97b7da5cd8ed6e151afb02478 WatchSource:0}: Error finding container bb8fce69d8a341346c519f8b4dc84bccb7029cd97b7da5cd8ed6e151afb02478: Status 404 returned error can't find the container with id bb8fce69d8a341346c519f8b4dc84bccb7029cd97b7da5cd8ed6e151afb02478
Feb 23 13:26:57.649363 master-0 kubenswrapper[17411]: I0223 13:26:57.649304 17411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:26:57.888437 master-0 kubenswrapper[17411]: I0223 13:26:57.888140 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w928r" event={"ID":"9610c5fc-e8ea-44d9-b931-049c2ac5828b","Type":"ContainerStarted","Data":"bb8fce69d8a341346c519f8b4dc84bccb7029cd97b7da5cd8ed6e151afb02478"}
Feb 23 13:26:59.914216 master-0 kubenswrapper[17411]: I0223 13:26:59.914113 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w928r" event={"ID":"9610c5fc-e8ea-44d9-b931-049c2ac5828b","Type":"ContainerStarted","Data":"13263fd9c1eea53f2ff52bd304feb096297498498f4292a021d69bc0af4f398b"}
Feb 23 13:26:59.944683 master-0 kubenswrapper[17411]: I0223 13:26:59.944230 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-w928r" podStartSLOduration=2.495581472 podStartE2EDuration="3.944204664s" podCreationTimestamp="2026-02-23 13:26:56 +0000 UTC" firstStartedPulling="2026-02-23 13:26:57.649270049 +0000 UTC m=+1211.076776646" lastFinishedPulling="2026-02-23 13:26:59.097893231 +0000 UTC m=+1212.525399838" observedRunningTime="2026-02-23 13:26:59.937205404 +0000 UTC m=+1213.364712041" watchObservedRunningTime="2026-02-23 13:26:59.944204664 +0000 UTC m=+1213.371711271"
Feb 23 13:27:06.374383 master-0 kubenswrapper[17411]: I0223 13:27:06.374283 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:27:06.374383 master-0 kubenswrapper[17411]: I0223 13:27:06.374372 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:27:06.415219 master-0 kubenswrapper[17411]: I0223 13:27:06.415145 17411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:27:07.027720 master-0 kubenswrapper[17411]: I0223 13:27:07.027612 17411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-w928r"
Feb 23 13:30:00.182800 master-0 kubenswrapper[17411]: I0223 13:30:00.182695 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"]
Feb 23 13:30:00.184270 master-0 kubenswrapper[17411]: I0223 13:30:00.184204 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.188071 master-0 kubenswrapper[17411]: I0223 13:30:00.186843 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 13:30:00.204266 master-0 kubenswrapper[17411]: I0223 13:30:00.204207 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"]
Feb 23 13:30:00.246418 master-0 kubenswrapper[17411]: I0223 13:30:00.246339 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ad3c7ba-c8c5-41b2-9265-629bcec46589-secret-volume\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.246418 master-0 kubenswrapper[17411]: I0223 13:30:00.246415 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj7fm\" (UniqueName: \"kubernetes.io/projected/2ad3c7ba-c8c5-41b2-9265-629bcec46589-kube-api-access-lj7fm\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.246811 master-0 kubenswrapper[17411]: I0223 13:30:00.246576 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ad3c7ba-c8c5-41b2-9265-629bcec46589-config-volume\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.349668 master-0 kubenswrapper[17411]: I0223 13:30:00.349553 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ad3c7ba-c8c5-41b2-9265-629bcec46589-secret-volume\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.349668 master-0 kubenswrapper[17411]: I0223 13:30:00.349660 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj7fm\" (UniqueName: \"kubernetes.io/projected/2ad3c7ba-c8c5-41b2-9265-629bcec46589-kube-api-access-lj7fm\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.350006 master-0 kubenswrapper[17411]: I0223 13:30:00.349725 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ad3c7ba-c8c5-41b2-9265-629bcec46589-config-volume\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.350999 master-0 kubenswrapper[17411]: I0223 13:30:00.350947 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ad3c7ba-c8c5-41b2-9265-629bcec46589-config-volume\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.354062 master-0 kubenswrapper[17411]: I0223 13:30:00.354027 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ad3c7ba-c8c5-41b2-9265-629bcec46589-secret-volume\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.369862 master-0 kubenswrapper[17411]: I0223 13:30:00.369794 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj7fm\" (UniqueName: \"kubernetes.io/projected/2ad3c7ba-c8c5-41b2-9265-629bcec46589-kube-api-access-lj7fm\") pod \"collect-profiles-29530890-njzwm\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.520499 master-0 kubenswrapper[17411]: I0223 13:30:00.520416 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:00.940050 master-0 kubenswrapper[17411]: I0223 13:30:00.940006 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"]
Feb 23 13:30:01.955097 master-0 kubenswrapper[17411]: I0223 13:30:01.955031 17411 generic.go:334] "Generic (PLEG): container finished" podID="2ad3c7ba-c8c5-41b2-9265-629bcec46589" containerID="fd1dfc82bad33b823ae45e48df853be06345a99a7d4ab88cbd48ce4bbbda6e3d" exitCode=0
Feb 23 13:30:01.956013 master-0 kubenswrapper[17411]: I0223 13:30:01.955118 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm" event={"ID":"2ad3c7ba-c8c5-41b2-9265-629bcec46589","Type":"ContainerDied","Data":"fd1dfc82bad33b823ae45e48df853be06345a99a7d4ab88cbd48ce4bbbda6e3d"}
Feb 23 13:30:01.956013 master-0 kubenswrapper[17411]: I0223 13:30:01.955189 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm" event={"ID":"2ad3c7ba-c8c5-41b2-9265-629bcec46589","Type":"ContainerStarted","Data":"69a71bf461bd58c09cb07395e975467d93fdec4ae79dbdb288d320f36f63bd32"}
Feb 23 13:30:03.291194 master-0 kubenswrapper[17411]: I0223 13:30:03.291136 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:30:03.415016 master-0 kubenswrapper[17411]: I0223 13:30:03.414951 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ad3c7ba-c8c5-41b2-9265-629bcec46589-secret-volume\") pod \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") "
Feb 23 13:30:03.415336 master-0 kubenswrapper[17411]: I0223 13:30:03.415084 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj7fm\" (UniqueName: \"kubernetes.io/projected/2ad3c7ba-c8c5-41b2-9265-629bcec46589-kube-api-access-lj7fm\") pod \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") "
Feb 23 13:30:03.415336 master-0 kubenswrapper[17411]: I0223 13:30:03.415157 17411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ad3c7ba-c8c5-41b2-9265-629bcec46589-config-volume\") pod \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\" (UID: \"2ad3c7ba-c8c5-41b2-9265-629bcec46589\") "
Feb 23 13:30:03.415737 master-0 kubenswrapper[17411]: I0223 13:30:03.415699 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ad3c7ba-c8c5-41b2-9265-629bcec46589-config-volume" (OuterVolumeSpecName: "config-volume") pod "2ad3c7ba-c8c5-41b2-9265-629bcec46589" (UID: "2ad3c7ba-c8c5-41b2-9265-629bcec46589"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:30:03.418028 master-0 kubenswrapper[17411]: I0223 13:30:03.417955 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ad3c7ba-c8c5-41b2-9265-629bcec46589-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2ad3c7ba-c8c5-41b2-9265-629bcec46589" (UID: "2ad3c7ba-c8c5-41b2-9265-629bcec46589"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:30:03.421012 master-0 kubenswrapper[17411]: I0223 13:30:03.420955 17411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad3c7ba-c8c5-41b2-9265-629bcec46589-kube-api-access-lj7fm" (OuterVolumeSpecName: "kube-api-access-lj7fm") pod "2ad3c7ba-c8c5-41b2-9265-629bcec46589" (UID: "2ad3c7ba-c8c5-41b2-9265-629bcec46589"). InnerVolumeSpecName "kube-api-access-lj7fm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:30:03.517600 master-0 kubenswrapper[17411]: I0223 13:30:03.517419 17411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ad3c7ba-c8c5-41b2-9265-629bcec46589-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:03.517600 master-0 kubenswrapper[17411]: I0223 13:30:03.517490 17411 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ad3c7ba-c8c5-41b2-9265-629bcec46589-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:03.517600 master-0 kubenswrapper[17411]: I0223 13:30:03.517507 17411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj7fm\" (UniqueName: \"kubernetes.io/projected/2ad3c7ba-c8c5-41b2-9265-629bcec46589-kube-api-access-lj7fm\") on node \"master-0\" DevicePath \"\""
Feb 23 13:30:03.974867 master-0 kubenswrapper[17411]: I0223 13:30:03.974794 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm" event={"ID":"2ad3c7ba-c8c5-41b2-9265-629bcec46589","Type":"ContainerDied","Data":"69a71bf461bd58c09cb07395e975467d93fdec4ae79dbdb288d320f36f63bd32"}
Feb 23 13:30:03.974867 master-0 kubenswrapper[17411]: I0223 13:30:03.974858 17411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a71bf461bd58c09cb07395e975467d93fdec4ae79dbdb288d320f36f63bd32"
Feb 23 13:30:03.975205 master-0 kubenswrapper[17411]: I0223 13:30:03.974946 17411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-njzwm"
Feb 23 13:32:08.237033 master-0 kubenswrapper[17411]: I0223 13:32:08.236924 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q4tzp/must-gather-qkhqw"]
Feb 23 13:32:08.238000 master-0 kubenswrapper[17411]: E0223 13:32:08.237572 17411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad3c7ba-c8c5-41b2-9265-629bcec46589" containerName="collect-profiles"
Feb 23 13:32:08.238000 master-0 kubenswrapper[17411]: I0223 13:32:08.237590 17411 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad3c7ba-c8c5-41b2-9265-629bcec46589" containerName="collect-profiles"
Feb 23 13:32:08.238000 master-0 kubenswrapper[17411]: I0223 13:32:08.237860 17411 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad3c7ba-c8c5-41b2-9265-629bcec46589" containerName="collect-profiles"
Feb 23 13:32:08.239088 master-0 kubenswrapper[17411]: I0223 13:32:08.239052 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q4tzp/must-gather-qkhqw"
Feb 23 13:32:08.242276 master-0 kubenswrapper[17411]: I0223 13:32:08.242207 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-q4tzp"/"openshift-service-ca.crt"
Feb 23 13:32:08.246336 master-0 kubenswrapper[17411]: I0223 13:32:08.246271 17411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-q4tzp"/"kube-root-ca.crt"
Feb 23 13:32:08.259204 master-0 kubenswrapper[17411]: I0223 13:32:08.259149 17411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q4tzp/must-gather-zr8jf"]
Feb 23 13:32:08.261494 master-0 kubenswrapper[17411]: I0223 13:32:08.261458 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q4tzp/must-gather-zr8jf"
Feb 23 13:32:08.270144 master-0 kubenswrapper[17411]: I0223 13:32:08.270062 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-q4tzp/must-gather-qkhqw"]
Feb 23 13:32:08.281022 master-0 kubenswrapper[17411]: I0223 13:32:08.280858 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wsl\" (UniqueName: \"kubernetes.io/projected/e54cb2b4-c8b9-4ba8-badd-defa4054180a-kube-api-access-48wsl\") pod \"must-gather-qkhqw\" (UID: \"e54cb2b4-c8b9-4ba8-badd-defa4054180a\") " pod="openshift-must-gather-q4tzp/must-gather-qkhqw"
Feb 23 13:32:08.281022 master-0 kubenswrapper[17411]: I0223 13:32:08.280958 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbb7\" (UniqueName: \"kubernetes.io/projected/043586bd-a2db-4fb2-8b98-fa9da2b05836-kube-api-access-bsbb7\") pod \"must-gather-zr8jf\" (UID: \"043586bd-a2db-4fb2-8b98-fa9da2b05836\") " pod="openshift-must-gather-q4tzp/must-gather-zr8jf"
Feb 23 13:32:08.281550 master-0 kubenswrapper[17411]: I0223 13:32:08.281412 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e54cb2b4-c8b9-4ba8-badd-defa4054180a-must-gather-output\") pod \"must-gather-qkhqw\" (UID: \"e54cb2b4-c8b9-4ba8-badd-defa4054180a\") " pod="openshift-must-gather-q4tzp/must-gather-qkhqw"
Feb 23 13:32:08.281642 master-0 kubenswrapper[17411]: I0223 13:32:08.281570 17411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/043586bd-a2db-4fb2-8b98-fa9da2b05836-must-gather-output\") pod \"must-gather-zr8jf\" (UID: \"043586bd-a2db-4fb2-8b98-fa9da2b05836\") " pod="openshift-must-gather-q4tzp/must-gather-zr8jf"
Feb 23 13:32:08.383691 master-0 kubenswrapper[17411]: I0223 13:32:08.383603 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e54cb2b4-c8b9-4ba8-badd-defa4054180a-must-gather-output\") pod \"must-gather-qkhqw\" (UID: \"e54cb2b4-c8b9-4ba8-badd-defa4054180a\") " pod="openshift-must-gather-q4tzp/must-gather-qkhqw"
Feb 23 13:32:08.383980 master-0 kubenswrapper[17411]: I0223 13:32:08.383856 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/043586bd-a2db-4fb2-8b98-fa9da2b05836-must-gather-output\") pod \"must-gather-zr8jf\" (UID: \"043586bd-a2db-4fb2-8b98-fa9da2b05836\") " pod="openshift-must-gather-q4tzp/must-gather-zr8jf"
Feb 23 13:32:08.383980 master-0 kubenswrapper[17411]: I0223 13:32:08.383934 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48wsl\" (UniqueName: \"kubernetes.io/projected/e54cb2b4-c8b9-4ba8-badd-defa4054180a-kube-api-access-48wsl\") pod \"must-gather-qkhqw\" (UID: \"e54cb2b4-c8b9-4ba8-badd-defa4054180a\") " pod="openshift-must-gather-q4tzp/must-gather-qkhqw"
Feb 23 13:32:08.383980 master-0 kubenswrapper[17411]: I0223 13:32:08.383964 17411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsbb7\" (UniqueName: \"kubernetes.io/projected/043586bd-a2db-4fb2-8b98-fa9da2b05836-kube-api-access-bsbb7\") pod \"must-gather-zr8jf\" (UID: \"043586bd-a2db-4fb2-8b98-fa9da2b05836\") " pod="openshift-must-gather-q4tzp/must-gather-zr8jf"
Feb 23 13:32:08.384639 master-0 kubenswrapper[17411]: I0223 13:32:08.384566 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e54cb2b4-c8b9-4ba8-badd-defa4054180a-must-gather-output\") pod \"must-gather-qkhqw\" (UID: \"e54cb2b4-c8b9-4ba8-badd-defa4054180a\") " pod="openshift-must-gather-q4tzp/must-gather-qkhqw"
Feb 23 13:32:08.384900 master-0 kubenswrapper[17411]: I0223 13:32:08.384772 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/043586bd-a2db-4fb2-8b98-fa9da2b05836-must-gather-output\") pod \"must-gather-zr8jf\" (UID: \"043586bd-a2db-4fb2-8b98-fa9da2b05836\") " pod="openshift-must-gather-q4tzp/must-gather-zr8jf"
Feb 23 13:32:08.419504 master-0 kubenswrapper[17411]: I0223 13:32:08.419434 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsbb7\" (UniqueName: \"kubernetes.io/projected/043586bd-a2db-4fb2-8b98-fa9da2b05836-kube-api-access-bsbb7\") pod \"must-gather-zr8jf\" (UID: \"043586bd-a2db-4fb2-8b98-fa9da2b05836\") " pod="openshift-must-gather-q4tzp/must-gather-zr8jf"
Feb 23 13:32:08.420145 master-0 kubenswrapper[17411]: I0223 13:32:08.420098 17411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48wsl\" (UniqueName: \"kubernetes.io/projected/e54cb2b4-c8b9-4ba8-badd-defa4054180a-kube-api-access-48wsl\") pod \"must-gather-qkhqw\" (UID: \"e54cb2b4-c8b9-4ba8-badd-defa4054180a\") " pod="openshift-must-gather-q4tzp/must-gather-qkhqw"
Feb 23 13:32:08.424868 master-0 kubenswrapper[17411]: I0223 13:32:08.424774 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-q4tzp/must-gather-zr8jf"]
Feb 23 13:32:08.559488 master-0 kubenswrapper[17411]: I0223 13:32:08.559318 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q4tzp/must-gather-qkhqw"
Feb 23 13:32:08.636855 master-0 kubenswrapper[17411]: I0223 13:32:08.636786 17411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q4tzp/must-gather-zr8jf"
Feb 23 13:32:09.060274 master-0 kubenswrapper[17411]: I0223 13:32:09.055951 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-q4tzp/must-gather-qkhqw"]
Feb 23 13:32:09.066481 master-0 kubenswrapper[17411]: I0223 13:32:09.065991 17411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 13:32:09.364673 master-0 kubenswrapper[17411]: I0223 13:32:09.364515 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q4tzp/must-gather-qkhqw" event={"ID":"e54cb2b4-c8b9-4ba8-badd-defa4054180a","Type":"ContainerStarted","Data":"a62c33e9c47226ac367f9dbc7dfdee5678cc0944277e10d48376a32c9c6baa26"}
Feb 23 13:32:09.695319 master-0 kubenswrapper[17411]: I0223 13:32:09.694940 17411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-q4tzp/must-gather-zr8jf"]
Feb 23 13:32:10.374922 master-0 kubenswrapper[17411]: I0223 13:32:10.374837 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q4tzp/must-gather-zr8jf" event={"ID":"043586bd-a2db-4fb2-8b98-fa9da2b05836","Type":"ContainerStarted","Data":"3c47526b7485e1601a6d83d340f5388b74aac724ed1906f2d1d441a5be206b22"}
Feb 23 13:32:12.393081 master-0 kubenswrapper[17411]: I0223 13:32:12.393006 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q4tzp/must-gather-qkhqw" event={"ID":"e54cb2b4-c8b9-4ba8-badd-defa4054180a","Type":"ContainerStarted","Data":"ba38ce1ad0b8756b3e12c3fcb684fdaaf1dd953e26cd2f00ac23c6f130e14b7e"}
Feb 23 13:32:13.408734 master-0 kubenswrapper[17411]: I0223 13:32:13.408662 17411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q4tzp/must-gather-qkhqw" event={"ID":"e54cb2b4-c8b9-4ba8-badd-defa4054180a","Type":"ContainerStarted","Data":"0f1b2728588adf6087cdabb46f4b3635999cb6c64ddd7cb2bc86a1d9a2e4db8e"}
Feb 23 13:32:13.817816 master-0 kubenswrapper[17411]: I0223 13:32:13.813349 17411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-q4tzp/must-gather-qkhqw" podStartSLOduration=3.017525783 podStartE2EDuration="5.813319518s" podCreationTimestamp="2026-02-23 13:32:08 +0000 UTC" firstStartedPulling="2026-02-23 13:32:09.065950954 +0000 UTC m=+1522.493457551" lastFinishedPulling="2026-02-23 13:32:11.861744689 +0000 UTC m=+1525.289251286" observedRunningTime="2026-02-23 13:32:13.796051638 +0000 UTC m=+1527.223558245" watchObservedRunningTime="2026-02-23 13:32:13.813319518 +0000 UTC m=+1527.240826135"
Feb 23 13:32:14.949163 master-0 kubenswrapper[17411]: I0223 13:32:14.949100 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-j4p78_fc576a63-0ea6-40c8-90bc-c44b5dc95ecd/cluster-version-operator/1.log"
Feb 23 13:32:15.204492 master-0 kubenswrapper[17411]: I0223 13:32:15.204373 17411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-j4p78_fc576a63-0ea6-40c8-90bc-c44b5dc95ecd/cluster-version-operator/0.log"