Feb 24 00:08:52 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 24 00:08:53 crc kubenswrapper[5122]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 00:08:53 crc kubenswrapper[5122]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 24 00:08:53 crc kubenswrapper[5122]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 00:08:53 crc kubenswrapper[5122]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 00:08:53 crc kubenswrapper[5122]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 24 00:08:53 crc kubenswrapper[5122]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.407464    5122 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410905    5122 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410932    5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410936    5122 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410940    5122 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410944    5122 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410947    5122 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410951    5122 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410954    5122 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410958    5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410966    5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410970    5122 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410974    5122 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410978    5122 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410982    5122 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410986    5122 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410990    5122 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410993    5122 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.410996    5122 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411001    5122 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411004    5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411007    5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411011    5122 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411014    5122 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411017    5122 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411021    5122 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411024    5122 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411027    5122 feature_gate.go:328] unrecognized feature gate: Example2
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411031    5122 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411034    5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411038    5122 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411042    5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411046    5122 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411050    5122 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411055    5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411059    5122 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411063    5122 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411082    5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411086    5122 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411089    5122 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411093    5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411096    5122 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411100    5122 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411105    5122 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411108    5122 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411113    5122 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411117    5122 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411125    5122 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411131    5122 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411136    5122 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411140    5122 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411144    5122 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411148    5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411154    5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411158    5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411162    5122 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411166    5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411170    5122 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411173    5122 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411177    5122 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411180    5122 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411184    5122 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411187    5122 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411190    5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411193    5122 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411196    5122 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411200    5122 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411205    5122 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411208    5122 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411211    5122 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411214    5122 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411218    5122 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411221    5122 feature_gate.go:328] unrecognized feature gate: Example
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411224    5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411227    5122 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411233    5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411236    5122 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411239    5122 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411242    5122 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411246    5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411249    5122 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411252    5122 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411256    5122 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411259    5122 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411262    5122 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411266    5122 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.411269    5122 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414694    5122 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414727    5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414733    5122 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414739    5122 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414744    5122 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414751    5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414756    5122 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414762    5122 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414767    5122 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414772    5122 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414777    5122 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414782    5122 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414790    5122 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414795    5122 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414800    5122 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414805    5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414810    5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414814    5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414819    5122 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414824    5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414828    5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414835    5122 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414842    5122 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414847    5122 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414851    5122 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414858    5122 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414863    5122 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414868    5122 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414872    5122 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414878    5122 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414883    5122 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414888    5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414893    5122 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414897    5122 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414902    5122 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414907    5122 feature_gate.go:328] unrecognized feature gate: Example2
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414912    5122 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414917    5122 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414921    5122 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414926    5122 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414931    5122 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414936    5122 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414941    5122 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414945    5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414950    5122 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414956    5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414961    5122 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414966    5122 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414973    5122 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.414996    5122 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415002    5122 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415007    5122 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415013    5122 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415018    5122 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415023    5122 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415028    5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415033    5122 feature_gate.go:328] unrecognized feature gate: Example
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415039    5122 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415045    5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415061    5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415095    5122 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415103    5122 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415108    5122 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415113    5122 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415118    5122 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415123    5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415127    5122 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415132    5122 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415137    5122 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415143    5122 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415148    5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415152    5122 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415157    5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415161    5122 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415166    5122 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415171    5122 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415175    5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415180    5122 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415186    5122 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415191    5122 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415196    5122 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415203    5122 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415208    5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415214    5122 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415221    5122 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.415227    5122 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415366    5122 flags.go:64] FLAG: --address="0.0.0.0"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415387    5122 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415396    5122 flags.go:64] FLAG: --anonymous-auth="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415405    5122 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415413    5122 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415419    5122 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415426    5122 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415434    5122 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415440    5122 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415446    5122 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415452    5122 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415458    5122 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415464    5122 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415470    5122 flags.go:64] FLAG: --cgroup-root=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415476    5122 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415481    5122 flags.go:64] FLAG: --client-ca-file=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415487    5122 flags.go:64] FLAG: --cloud-config=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415492    5122 flags.go:64] FLAG: --cloud-provider=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415497    5122 flags.go:64] FLAG: --cluster-dns="[]"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415506    5122 flags.go:64] FLAG: --cluster-domain=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415511    5122 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415517    5122 flags.go:64] FLAG: --config-dir=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415523    5122 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415529    5122 flags.go:64] FLAG: --container-log-max-files="5"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415536    5122 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415545    5122 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415550    5122 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415556    5122 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415562    5122 flags.go:64] FLAG: --contention-profiling="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415568    5122 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415573    5122 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415581    5122 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415586    5122 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415594    5122 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415600    5122 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415605    5122 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415610    5122 flags.go:64] FLAG: --enable-load-reader="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415616    5122 flags.go:64] FLAG: --enable-server="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415622    5122 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415630    5122 flags.go:64] FLAG: --event-burst="100"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415637    5122 flags.go:64] FLAG: --event-qps="50"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415643    5122 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415650    5122 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415658    5122 flags.go:64] FLAG: --eviction-hard=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415665    5122 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415670    5122 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415676    5122 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415681    5122 flags.go:64] FLAG: --eviction-soft=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415687    5122 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415692    5122 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415698    5122 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415703    5122 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415714    5122 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415720    5122 flags.go:64] FLAG: --fail-swap-on="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415725    5122 flags.go:64] FLAG: --feature-gates=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415734    5122 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415740    5122 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415747    5122 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415755    5122 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415762    5122 flags.go:64] FLAG: --healthz-port="10248"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415768    5122 flags.go:64] FLAG: --help="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415773    5122 flags.go:64] FLAG: --hostname-override=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415778    5122 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415783    5122 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415789    5122 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415794    5122 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415800    5122 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415805    5122 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415810    5122 flags.go:64] FLAG: --image-service-endpoint=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415816    5122 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415822    5122 flags.go:64] FLAG: --kube-api-burst="100"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415827    5122 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415833    5122 flags.go:64] FLAG: --kube-api-qps="50"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415839    5122 flags.go:64] FLAG: --kube-reserved=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415844    5122 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415849    5122 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415855    5122 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415860    5122 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415867    5122 flags.go:64] FLAG: --lock-file=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415873    5122 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415880    5122 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415887    5122 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415899    5122 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415906    5122 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415915    5122 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415921    5122 flags.go:64] FLAG: --logging-format="text"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415926    5122 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415933    5122 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415938    5122 flags.go:64] FLAG: --manifest-url=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415943    5122 flags.go:64] FLAG: --manifest-url-header=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415951    5122 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415958    5122
flags.go:64] FLAG: --max-open-files="1000000" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415967 5122 flags.go:64] FLAG: --max-pods="110" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415972 5122 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415978 5122 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415983 5122 flags.go:64] FLAG: --memory-manager-policy="None" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415988 5122 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.415994 5122 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416000 5122 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416006 5122 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416019 5122 flags.go:64] FLAG: --node-status-max-images="50" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416025 5122 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416031 5122 flags.go:64] FLAG: --oom-score-adj="-999" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416037 5122 flags.go:64] FLAG: --pod-cidr="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416043 5122 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416055 5122 flags.go:64] FLAG: --pod-manifest-path="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416062 5122 flags.go:64] FLAG: --pod-max-pids="-1" Feb 24 
00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416093 5122 flags.go:64] FLAG: --pods-per-core="0" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416099 5122 flags.go:64] FLAG: --port="10250" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416105 5122 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416110 5122 flags.go:64] FLAG: --provider-id="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416116 5122 flags.go:64] FLAG: --qos-reserved="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416121 5122 flags.go:64] FLAG: --read-only-port="10255" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416127 5122 flags.go:64] FLAG: --register-node="true" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416132 5122 flags.go:64] FLAG: --register-schedulable="true" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416137 5122 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416151 5122 flags.go:64] FLAG: --registry-burst="10" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416156 5122 flags.go:64] FLAG: --registry-qps="5" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416162 5122 flags.go:64] FLAG: --reserved-cpus="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416167 5122 flags.go:64] FLAG: --reserved-memory="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416173 5122 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416179 5122 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416185 5122 flags.go:64] FLAG: --rotate-certificates="false" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416190 5122 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416196 
5122 flags.go:64] FLAG: --runonce="false" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416202 5122 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416208 5122 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416213 5122 flags.go:64] FLAG: --seccomp-default="false" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416218 5122 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416224 5122 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416229 5122 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416235 5122 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416241 5122 flags.go:64] FLAG: --storage-driver-password="root" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416246 5122 flags.go:64] FLAG: --storage-driver-secure="false" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416252 5122 flags.go:64] FLAG: --storage-driver-table="stats" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416257 5122 flags.go:64] FLAG: --storage-driver-user="root" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416263 5122 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416268 5122 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416274 5122 flags.go:64] FLAG: --system-cgroups="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416279 5122 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416288 5122 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 24 
00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416293 5122 flags.go:64] FLAG: --tls-cert-file="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416298 5122 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416306 5122 flags.go:64] FLAG: --tls-min-version="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416312 5122 flags.go:64] FLAG: --tls-private-key-file="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416317 5122 flags.go:64] FLAG: --topology-manager-policy="none" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416322 5122 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416328 5122 flags.go:64] FLAG: --topology-manager-scope="container" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416336 5122 flags.go:64] FLAG: --v="2" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416379 5122 flags.go:64] FLAG: --version="false" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416387 5122 flags.go:64] FLAG: --vmodule="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416394 5122 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.416400 5122 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416559 5122 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416566 5122 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416571 5122 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416576 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 
00:08:53.416591 5122 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416597 5122 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416602 5122 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416607 5122 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416612 5122 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416621 5122 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416626 5122 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416631 5122 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416636 5122 feature_gate.go:328] unrecognized feature gate: Example2 Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416641 5122 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416645 5122 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416651 5122 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416656 5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416661 5122 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416666 
5122 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416670 5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416675 5122 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416680 5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416685 5122 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416690 5122 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416694 5122 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416699 5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416704 5122 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416711 5122 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416715 5122 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416723 5122 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416729 5122 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416735 5122 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416740 5122 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416745 5122 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416751 5122 feature_gate.go:328] unrecognized feature gate: Example Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416755 5122 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416760 5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416773 5122 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416778 5122 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416783 5122 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416788 5122 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416795 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416800 5122 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416805 5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 
00:08:53.416809 5122 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416814 5122 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416819 5122 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416823 5122 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416828 5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416833 5122 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416838 5122 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416843 5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416848 5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416854 5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416860 5122 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416866 5122 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416871 5122 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416877 5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416883 5122 
feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416891 5122 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416898 5122 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416904 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416910 5122 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416916 5122 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416922 5122 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416928 5122 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416934 5122 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416940 5122 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416945 5122 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416951 5122 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416967 5122 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416975 5122 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416982 5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416990 5122 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.416996 5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417001 5122 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417007 5122 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417012 5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417018 5122 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417024 5122 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417029 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417035 5122 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417041 5122 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417046 5122 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417052 5122 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.417058 5122 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.419236 
5122 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.435556 5122 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.435600 5122 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435662 5122 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435670 5122 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435674 5122 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435678 5122 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435682 5122 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435685 5122 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435689 5122 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435692 5122 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435696 5122 feature_gate.go:328] unrecognized 
feature gate: OVNObservability Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435704 5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435708 5122 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435711 5122 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435714 5122 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435718 5122 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435721 5122 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435724 5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435728 5122 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435732 5122 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435735 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435739 5122 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435742 5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435745 5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435748 5122 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 24 00:08:53 crc 
kubenswrapper[5122]: W0224 00:08:53.435751 5122 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435754 5122 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435758 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435761 5122 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435764 5122 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435767 5122 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435770 5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435774 5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435781 5122 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435784 5122 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435789 5122 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435794 5122 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435798 5122 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435802 5122 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435805 5122 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435808 5122 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435812 5122 feature_gate.go:328] unrecognized feature gate: Example
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435815 5122 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435818 5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435821 5122 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435825 5122 feature_gate.go:328] unrecognized feature gate: Example2
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435828 5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435831 5122 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435834 5122 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435837 5122 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435841 5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435844 5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435847 5122 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435850 5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435854 5122 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.435857 5122 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436041 5122 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436045 5122 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436048 5122 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436051 5122 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436054 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436057 5122 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436061 5122 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436064 5122 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436067 5122 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436129 5122 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436143 5122 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436149 5122 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436153 5122 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436157 5122 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436161 5122 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436164 5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436167 5122 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436171 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436174 5122 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436178 5122 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436182 5122 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436185 5122 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436188 5122 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436191 5122 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436195 5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436198 5122 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436201 5122 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436205 5122 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436208 5122 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436211 5122 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436214 5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436218 5122 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.436224 5122 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436336 5122 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436343 5122 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436347 5122 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436351 5122 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436378 5122 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436384 5122 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436387 5122 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436392 5122 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436396 5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436400 5122 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436405 5122 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436409 5122 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436413 5122 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436418 5122 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436423 5122 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436427 5122 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436431 5122 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436435 5122 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436438 5122 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436442 5122 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436445 5122 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436448 5122 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436452 5122 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436456 5122 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436460 5122 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436464 5122 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436468 5122 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436472 5122 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436476 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436480 5122 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436484 5122 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436488 5122 feature_gate.go:328] unrecognized feature gate: Example2
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436491 5122 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436494 5122 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436497 5122 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436501 5122 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436504 5122 feature_gate.go:328] unrecognized feature gate: Example
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436507 5122 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436510 5122 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436513 5122 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436517 5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436520 5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436523 5122 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436528 5122 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436531 5122 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436534 5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436537 5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436541 5122 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436544 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436547 5122 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436552 5122 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436555 5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436558 5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436561 5122 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436565 5122 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436568 5122 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436571 5122 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436574 5122 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436578 5122 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436581 5122 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436584 5122 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436588 5122 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436592 5122 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436595 5122 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436598 5122 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436602 5122 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436605 5122 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436608 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436611 5122 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436615 5122 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436618 5122 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436621 5122 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436624 5122 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436627 5122 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436631 5122 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436634 5122 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436638 5122 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436641 5122 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436645 5122 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436648 5122 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436651 5122 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436654 5122 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436658 5122 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436661 5122 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436664 5122 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 24 00:08:53 crc kubenswrapper[5122]: W0224 00:08:53.436668 5122 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.436674 5122 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.436841 5122 server.go:962] "Client rotation is on, will bootstrap in background"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.442228 5122 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.446787 5122 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.446927 5122 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.448293 5122 server.go:1019] "Starting client certificate rotation"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.448569 5122 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.449948 5122 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.496148 5122 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.501359 5122 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.510050 5122 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.520814 5122 log.go:25] "Validated CRI v1 runtime API"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.601585 5122 log.go:25] "Validated CRI v1 image API"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.605427 5122 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.616023 5122 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-02-24-00-02-38-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.616061 5122 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.631063 5122 manager.go:217] Machine: {Timestamp:2026-02-24 00:08:53.628205667 +0000 UTC m=+0.717660190 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:e2261f0c-b7f7-46fe-a312-4eb5967f7e40 BootID:6c6c5e4a-ab9c-4e6a-ad00-267208aca03c Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1f:7b:bc Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1f:7b:bc Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f1:aa:86 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a5:fe:85 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:40:e9:26 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:20:30:97 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:d1:77:95:ab:de Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:ad:9b:71:d9:46 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.632123 5122 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.632286 5122 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.640546 5122 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.640622 5122 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.640834 5122 topology_manager.go:138] "Creating topology manager with none policy"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.640852 5122 container_manager_linux.go:306] "Creating device plugin manager"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.640874 5122 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.643216 5122 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.643554 5122 state_mem.go:36] "Initialized new in-memory state store"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.643756 5122 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.647427 5122 kubelet.go:491] "Attempting to sync node with API server"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.647465 5122 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.647495 5122 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.647510 5122 kubelet.go:397] "Adding apiserver pod source"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.647533 5122 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.654026 5122 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.654061 5122 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.654656 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.656596 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.658349 5122 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.658382 5122 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.665248 5122 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.665536 5122 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.666373 5122 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668154 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668192 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668204 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668214 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668225 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668236 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668246 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668257 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668271 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668300 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668322 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.668787 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.671100 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.671128 5122 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.672842 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.692702 5122 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.692765 5122 server.go:1295] "Started kubelet"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.693011 5122 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.693263 5122 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.693460 5122 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.694212 5122 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 24 00:08:53 crc systemd[1]: Started Kubernetes Kubelet.
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.696372 5122 server.go:317] "Adding debug handlers to kubelet server" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.696912 5122 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.697832 5122 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.698667 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.698672 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.698808 5122 volume_manager.go:295] "The desired_state_of_world populator starts" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.698834 5122 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.698885 5122 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.699096 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.698241 5122 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.130:6443: connect: connection 
refused" event="&Event{ObjectMeta:{crc.18970624d734bee0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.692727008 +0000 UTC m=+0.782181531,LastTimestamp:2026-02-24 00:08:53.692727008 +0000 UTC m=+0.782181531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.702150 5122 factory.go:55] Registering systemd factory Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.702188 5122 factory.go:223] Registration of the systemd container factory successfully Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.702507 5122 factory.go:153] Registering CRI-O factory Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.702534 5122 factory.go:223] Registration of the crio container factory successfully Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.702616 5122 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.702646 5122 factory.go:103] Registering Raw factory Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.702661 5122 manager.go:1196] Started watching for new ooms in manager Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.703378 5122 manager.go:319] Starting recovery of all containers Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.738613 5122 manager.go:324] Recovery completed Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.743272 5122 watcher.go:152] Failed to watch directory 
"/sys/fs/cgroup/system.slice/ocp-mco-sshkey.service": inotify_add_watch /sys/fs/cgroup/system.slice/ocp-mco-sshkey.service: no such file or directory Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.743320 5122 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/ocp-userpasswords.service": inotify_add_watch /sys/fs/cgroup/system.slice/ocp-userpasswords.service: no such file or directory Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.754762 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.756316 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.756361 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.756373 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.757575 5122 cpu_manager.go:222] "Starting CPU manager" policy="none" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.757594 5122 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.757616 5122 state_mem.go:36] "Initialized new in-memory state store" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.765857 5122 policy_none.go:49] "None policy: Start" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766130 5122 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766145 5122 state_mem.go:35] "Initializing new in-memory state store" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766130 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766196 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766212 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766225 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766241 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766252 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766263 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766275 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766290 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766302 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766314 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766326 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766337 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766351 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766367 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766380 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766390 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766402 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766413 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" 
seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766425 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766437 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766447 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766457 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766468 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766480 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766491 5122 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766502 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766513 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766530 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766543 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766554 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766579 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766590 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766601 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766613 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766624 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766635 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766646 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" 
volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766656 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766666 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766675 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766683 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766692 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766700 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766716 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766725 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766734 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766746 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766756 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766765 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" 
seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766775 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766786 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766796 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766807 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766816 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766825 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 
00:08:53.766841 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766851 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766860 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766869 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766880 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766888 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766898 5122 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766908 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766918 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766926 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766936 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766946 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766955 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" 
volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766965 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766974 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766983 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.766992 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767001 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767010 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767018 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767027 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767037 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767052 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767065 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767090 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767103 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767131 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767141 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767151 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767161 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767170 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767179 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767189 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767199 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767212 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767223 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767237 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767247 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767260 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.767982 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.768001 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.768014 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.768026 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.768041 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.771035 5122 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.773511 5122 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.773544 5122 status_manager.go:230] "Starting to sync pod status with apiserver"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.773568 5122 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.773578 5122 kubelet.go:2451] "Starting kubelet main sync loop"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.773648 5122 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.774852 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775484 5122 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775549 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775573 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775587 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775602 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775617 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775631 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775645 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775658 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775671 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775686 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775703 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775719 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775763 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775777 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775791 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775803 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775822 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775836 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775850 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775863 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775876 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775889 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775903 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775915 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775929 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775942 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775954 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775967 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.775979 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776009 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776021 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776034 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776045 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776058 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776089 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776102 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776115 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776129 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776146 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776163 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776176 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776191 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776205 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776221 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776237 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776249 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776262 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776274 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776286 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776298 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776309 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776322 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776333 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776346 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776358 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776370 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776382 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776393 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776407 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776419 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776431 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776448 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776459 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776470 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776481 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776492 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776505 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776518 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776531 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776569 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776581 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776592 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776603 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776617 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776629 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776641 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776653 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776665 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776676 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776687 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776698 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776710 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776721 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776732 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776744 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776757 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776767 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca"
seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776779 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776790 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776801 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776813 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776824 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776836 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776850 5122 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776861 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776872 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776883 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776893 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776905 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776916 5122 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776929 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776939 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776951 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776961 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776973 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776984 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.776996 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777006 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777017 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777028 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777039 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777049 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777060 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777092 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777107 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777121 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777138 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777156 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777169 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777180 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777190 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777201 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777214 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777260 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" 
volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777317 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777330 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777343 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777354 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777365 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777378 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777392 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777404 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777415 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777426 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777437 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777449 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" 
seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777462 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777474 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777486 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777497 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777508 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777519 5122 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Feb 24 00:08:53 crc kubenswrapper[5122]: 
I0224 00:08:53.777530 5122 reconstruct.go:97] "Volume reconstruction finished" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.777538 5122 reconciler.go:26] "Reconciler: start to sync state" Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.799160 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.814268 5122 manager.go:341] "Starting Device Plugin manager" Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.814567 5122 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.814586 5122 server.go:85] "Starting device plugin registration server" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.815032 5122 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.815045 5122 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.815305 5122 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.815377 5122 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.815387 5122 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.820154 5122 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.820225 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.874045 5122 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.874348 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.875351 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.875426 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.875441 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.876424 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.876769 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.876854 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.878569 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.878618 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.878633 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.878736 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.878781 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.878796 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.879679 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.879825 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.879889 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.880344 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.880390 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.880411 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.880458 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.880479 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.880492 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.881549 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.881777 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.881854 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.882044 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.882091 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.882105 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.882852 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.882971 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.883011 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.883026 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.883174 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.883260 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.883268 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.883517 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.883535 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.884462 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.884505 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.885365 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.885418 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.885443 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.885781 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.885819 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.885838 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.899941 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.916134 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.917643 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.917718 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.917733 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.917770 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.918960 5122 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.920204 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.936906 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.963617 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.972303 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: E0224 00:08:53.977121 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.979717 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.979770 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.979825 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.979851 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.979879 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.979900 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.979947 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.979971 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980002 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980022 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980119 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980265 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980307 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980503 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980507 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980702 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980747 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980780 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980811 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980865 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980904 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980934 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.980983 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.981011 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.981044 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.981211 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.981375 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.981684 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.981781 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:53 crc kubenswrapper[5122]: I0224 00:08:53.982184 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082543 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082577 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082640 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082668 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082689 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082704 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082718 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082734 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082752 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082758 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082767 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082811 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082810 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082835 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082844 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082853 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082877 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082883 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082914 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082935 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082942 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082960 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082972 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082983 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.083009 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.083030 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.083049 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.083095 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.082918 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.083123 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.083144 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.083165 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.119162 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.120605 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.120811 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.120897 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.121005 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: E0224 00:08:54.121871 5122 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.221645 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.237317 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.265361 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.273960 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.277946 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: W0224 00:08:54.278656 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-f652d54e9b054fe5a351f79d3bccce06b53034e497882dea2c247127f7b44618 WatchSource:0}: Error finding container f652d54e9b054fe5a351f79d3bccce06b53034e497882dea2c247127f7b44618: Status 404 returned error can't find the container with id f652d54e9b054fe5a351f79d3bccce06b53034e497882dea2c247127f7b44618
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.294757 5122 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 00:08:54 crc kubenswrapper[5122]: E0224 00:08:54.302882 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms"
Feb 24 00:08:54 crc kubenswrapper[5122]: W0224 00:08:54.318112 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-beb682b174ae954e43fa3bab681322411be93efd419d2f6536923b20f8450832 WatchSource:0}: Error finding container beb682b174ae954e43fa3bab681322411be93efd419d2f6536923b20f8450832: Status 404 returned error can't find the container with id beb682b174ae954e43fa3bab681322411be93efd419d2f6536923b20f8450832
Feb 24 00:08:54 crc kubenswrapper[5122]: W0224 00:08:54.323877 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-e67fc9e4d5b36069a1e5ec8b9d5265baae3b03ae170b2e597e178ddd45aa7f65 WatchSource:0}: Error finding container e67fc9e4d5b36069a1e5ec8b9d5265baae3b03ae170b2e597e178ddd45aa7f65: Status 404 returned error can't find the container with id e67fc9e4d5b36069a1e5ec8b9d5265baae3b03ae170b2e597e178ddd45aa7f65
Feb 24 00:08:54 crc kubenswrapper[5122]: E0224 00:08:54.477936 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.522008 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.522836 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.522889 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.522905 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.522932 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: E0224 00:08:54.523337 5122 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.674118 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Feb 24 00:08:54 crc kubenswrapper[5122]: E0224 00:08:54.737649 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.779543 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"0296faf1c90c731267f0e8f339ed6f7234fffebde3032ff40d8463ce2b7112ec"}
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.780604 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"beb682b174ae954e43fa3bab681322411be93efd419d2f6536923b20f8450832"}
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.781514 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f652d54e9b054fe5a351f79d3bccce06b53034e497882dea2c247127f7b44618"}
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.782555 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"3cfcbb4d691a0271d495527b7f0b6f003f05282095b2fa4dcc9402123ce09099"}
Feb 24 00:08:54 crc kubenswrapper[5122]: I0224 00:08:54.783454 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"e67fc9e4d5b36069a1e5ec8b9d5265baae3b03ae170b2e597e178ddd45aa7f65"}
Feb 24 00:08:54 crc kubenswrapper[5122]: E0224 00:08:54.913965 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 24 00:08:55 crc kubenswrapper[5122]: E0224 00:08:55.026702 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 24 00:08:55 crc kubenswrapper[5122]: E0224 00:08:55.104056 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.323763 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.325863 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.325926 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.325946 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.325982 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 24 00:08:55 crc kubenswrapper[5122]: E0224 00:08:55.326684 5122 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.669375 5122 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 24 00:08:55 crc kubenswrapper[5122]: E0224 00:08:55.670661 5122 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.673445 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.789134 5122 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="8a710b6cd0aa59798f0a69c269a8b24d8b4f17ea6a579e90b47bb202c6169f5b" exitCode=0
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.789288 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"8a710b6cd0aa59798f0a69c269a8b24d8b4f17ea6a579e90b47bb202c6169f5b"}
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.789343 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.792128 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.792265 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.792297 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:08:55 crc kubenswrapper[5122]: E0224 00:08:55.793174 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.794145 5122 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="a372e08af69843d3b18e6e4c24c44395e0da1c0a36d6b93fbba0efeebdce5cd8" exitCode=0
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.794258 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"a372e08af69843d3b18e6e4c24c44395e0da1c0a36d6b93fbba0efeebdce5cd8"}
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.794295 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.794868 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.794894 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.794904 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:08:55 crc kubenswrapper[5122]: E0224 00:08:55.795107 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.795581 5122 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="8b58dd6893e4cef4d951983cf64df766277d25e7d254fdb9da4768cc60541d49" exitCode=0
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.795606 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"8b58dd6893e4cef4d951983cf64df766277d25e7d254fdb9da4768cc60541d49"}
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.795681 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.796344 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.796368 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.796379 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:08:55 crc kubenswrapper[5122]: E0224 00:08:55.796535 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.798592 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"82c8818558feb94b0d67e95aa2cddd5d1293d6c4d3b927db398b4dedc3dbe6e7"}
Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.798624 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8cd870d8a5266d17b821eea88d085de06b8be9f1ffb9d281f7f78e4e68bcf7f5"} Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.800124 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4e2e508b94b0720c8553587b8cfb2f3ad7a5265f46b8e90239d02595822736e9" exitCode=0 Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.800159 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4e2e508b94b0720c8553587b8cfb2f3ad7a5265f46b8e90239d02595822736e9"} Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.800233 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.800716 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.800737 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.800745 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:55 crc kubenswrapper[5122]: E0224 00:08:55.800902 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.802809 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.803461 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.803520 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:55 crc kubenswrapper[5122]: I0224 00:08:55.803539 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.390285 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.673682 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.705281 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.803162 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"d6e44643d210ceb4920fcf1c5f1494b26afa004de1f7c12988c9d7999a371048"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.803196 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.803691 5122 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.803715 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.803724 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.803876 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.806967 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"00d43efa38f5033cfa155c64f6d684c0248152b40fa12b642aaeab2cd652b289"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.807088 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"363f5c9ff48c3fdaf5b9a6cc53eec30e0a9a336cc8aa985ae3d895d4b1090acf"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.807163 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"5dde031f80d706aaad533a8ae7343d88019c52241161581110e7c1dd1e0a210a"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.807311 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.807821 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:56 crc 
kubenswrapper[5122]: I0224 00:08:56.807910 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.807971 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.808178 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.810044 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b2415088a0b144b364e014dc2c2793295fa3acf33fdf864215058c6e2fc074ad"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.810224 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"30d7deb84151dc4c7e62cf03ab1e321de8aae77f535fd6edaaa05fb92be7de9b"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.810397 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.811701 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.811719 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.811729 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.811861 5122 kubelet.go:3336] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.814886 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a68c1527a3daaf2edd8a58adc3928d53f63266e661d665d090ae7d0850e50d2e"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.814916 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cd66379a5e0fec18bb00729a9f9015cac040f0c1bc1927f73a7a5603f8d6fe10"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.814929 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7e62718d5fa1a2c8d163a016ae2607ec93029e94464ebf0518d890c39534e4b0"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.814940 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cfbf4f7e6544aaa90a5b7583d6b85e287ed0d459941edf55d5ac1fda8a1c905a"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.819726 5122 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="95e7c8780e3115cf933d38710ac800c34a21519bb26dde3726004c5d61525982" exitCode=0 Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.819753 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"95e7c8780e3115cf933d38710ac800c34a21519bb26dde3726004c5d61525982"} Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.819870 5122 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.820329 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.820355 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.820363 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.820525 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.861057 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.927204 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.928111 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.928157 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.928173 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:56 crc kubenswrapper[5122]: I0224 00:08:56.928201 5122 kubelet_node_status.go:78] "Attempting to register node" 
node="crc" Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.928682 5122 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Feb 24 00:08:56 crc kubenswrapper[5122]: E0224 00:08:56.949333 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 24 00:08:57 crc kubenswrapper[5122]: E0224 00:08:57.028829 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.673731 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.826360 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7df3e8bd34f3974c181ba3741474a63639e756f288773ff2d2c6c86c1b0697ed"} Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.826428 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.827055 5122 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.827140 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.827161 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:57 crc kubenswrapper[5122]: E0224 00:08:57.827526 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.829272 5122 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="26bcfa7e06bd18bc803b0a2c36b6d8824438deb1b4f449ab966718da5ed09c0b" exitCode=0 Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.829379 5122 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.829404 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.829435 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.829470 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"26bcfa7e06bd18bc803b0a2c36b6d8824438deb1b4f449ab966718da5ed09c0b"} Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.829641 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.829922 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:57 crc 
kubenswrapper[5122]: I0224 00:08:57.829949 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.829962 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.830238 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.830270 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.830282 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.830425 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:57 crc kubenswrapper[5122]: E0224 00:08:57.830484 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.830546 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.830645 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.830681 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:57 crc kubenswrapper[5122]: E0224 00:08:57.830810 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.831098 
5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.831148 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:57 crc kubenswrapper[5122]: I0224 00:08:57.831166 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:57 crc kubenswrapper[5122]: E0224 00:08:57.831354 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:57 crc kubenswrapper[5122]: E0224 00:08:57.831570 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.673903 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.835971 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b0404205dedaef25e5b8b2bad3ce06a78967e0ee0a405a9cdf31c2355a229ded"} Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.836047 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"0981405f8113d4558dfdb52447b5fa4417fe60cfd98889f94fcdd01e93ed7316"} Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.836064 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9ee57fc103a97dd704af80ae8a445f763395baba3377d997f162f872b97c2d45"} Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 
00:08:58.841579 5122 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.841648 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.842441 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.842491 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.842501 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:58 crc kubenswrapper[5122]: E0224 00:08:58.842898 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.942725 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.943111 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.944500 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.944544 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:58 crc kubenswrapper[5122]: I0224 00:08:58.944558 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:58 crc kubenswrapper[5122]: E0224 00:08:58.945031 5122 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.843424 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b96a9424f5348939a10e92657d35277dbd8158c886cd7153f52b0f6e04f4027e"} Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.843484 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2fb1a626a85873c32d3e6fd0269911dfe59a972075effd239133fc641726b461"} Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.843562 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.843982 5122 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.844109 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.844438 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.844506 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.844521 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.844700 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.844741 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.844759 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:59 crc kubenswrapper[5122]: E0224 00:08:59.844872 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:59 crc kubenswrapper[5122]: E0224 00:08:59.845290 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:08:59 crc kubenswrapper[5122]: I0224 00:08:59.963046 5122 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.096836 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.097248 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.098557 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.098593 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.098607 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:00 crc kubenswrapper[5122]: E0224 00:09:00.098966 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.129265 5122 kubelet_node_status.go:413] "Setting node annotation to 
enable volume controller attach/detach" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.130710 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.130768 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.130781 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.130815 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.846537 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.848494 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.848551 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:00 crc kubenswrapper[5122]: I0224 00:09:00.848564 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:00 crc kubenswrapper[5122]: E0224 00:09:00.849143 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.375229 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.375561 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 
00:09:01.376881 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.376955 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.377011 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:01 crc kubenswrapper[5122]: E0224 00:09:01.377723 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.482999 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.483388 5122 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.483448 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.484861 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.484944 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:01 crc kubenswrapper[5122]: I0224 00:09:01.484968 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:01 crc kubenswrapper[5122]: E0224 00:09:01.485605 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.229519 5122 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.229864 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.231118 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.231260 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.231296 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:03 crc kubenswrapper[5122]: E0224 00:09:03.231935 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.716278 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.716640 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.717678 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.717755 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:03 crc kubenswrapper[5122]: I0224 00:09:03.717778 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:03 crc kubenswrapper[5122]: E0224 00:09:03.718634 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"crc\" not found" node="crc" Feb 24 00:09:03 crc kubenswrapper[5122]: E0224 00:09:03.820635 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.350775 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.351056 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.351998 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.352066 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.352116 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:04 crc kubenswrapper[5122]: E0224 00:09:04.352643 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.362277 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.375028 5122 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.375256 5122 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.594000 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.594413 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.595590 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.595660 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.595685 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:04 crc kubenswrapper[5122]: E0224 00:09:04.596473 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.857095 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.857324 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.857946 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.858006 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.858025 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:04 crc kubenswrapper[5122]: E0224 00:09:04.858585 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:04 crc kubenswrapper[5122]: I0224 00:09:04.865547 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:09:05 crc kubenswrapper[5122]: I0224 00:09:05.859284 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:05 crc kubenswrapper[5122]: I0224 00:09:05.860529 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:05 crc kubenswrapper[5122]: I0224 00:09:05.860650 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:05 crc kubenswrapper[5122]: I0224 00:09:05.860685 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:05 crc kubenswrapper[5122]: E0224 00:09:05.861284 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:06 crc kubenswrapper[5122]: I0224 00:09:06.861984 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:06 crc kubenswrapper[5122]: I0224 00:09:06.863124 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:06 crc kubenswrapper[5122]: I0224 00:09:06.863175 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:09:06 crc kubenswrapper[5122]: I0224 00:09:06.863188 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:06 crc kubenswrapper[5122]: E0224 00:09:06.863605 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.673980 5122 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.675261 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.675351 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 24 00:09:08 crc kubenswrapper[5122]: E0224 00:09:08.835373 5122 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.18970624d734bee0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.692727008 +0000 UTC m=+0.782181531,LastTimestamp:2026-02-24 00:08:53.692727008 +0000 UTC m=+0.782181531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.868399 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.870354 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7df3e8bd34f3974c181ba3741474a63639e756f288773ff2d2c6c86c1b0697ed" exitCode=255 Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.870415 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7df3e8bd34f3974c181ba3741474a63639e756f288773ff2d2c6c86c1b0697ed"} Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.870570 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.871272 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.871337 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.871356 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:08 crc kubenswrapper[5122]: E0224 00:09:08.871910 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"crc\" not found" node="crc" Feb 24 00:09:08 crc kubenswrapper[5122]: I0224 00:09:08.872337 5122 scope.go:117] "RemoveContainer" containerID="7df3e8bd34f3974c181ba3741474a63639e756f288773ff2d2c6c86c1b0697ed" Feb 24 00:09:09 crc kubenswrapper[5122]: I0224 00:09:09.197212 5122 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 00:09:09 crc kubenswrapper[5122]: I0224 00:09:09.197305 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 24 00:09:09 crc kubenswrapper[5122]: I0224 00:09:09.876885 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 24 00:09:09 crc kubenswrapper[5122]: I0224 00:09:09.879268 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"282599ca9018ca3d0f0d4d2a8d7c09268ce16bf9abe42c4a6797d2d56c2d4e16"} Feb 24 00:09:09 crc kubenswrapper[5122]: I0224 00:09:09.879655 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:09 crc kubenswrapper[5122]: I0224 00:09:09.880378 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:09 crc kubenswrapper[5122]: I0224 00:09:09.880604 5122 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:09 crc kubenswrapper[5122]: I0224 00:09:09.880746 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:09 crc kubenswrapper[5122]: E0224 00:09:09.881450 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:09 crc kubenswrapper[5122]: E0224 00:09:09.907245 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.680169 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.680403 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.680930 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.681572 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.681609 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.681620 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:13 crc kubenswrapper[5122]: E0224 00:09:13.681922 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" 
node="crc" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.685828 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:13 crc kubenswrapper[5122]: E0224 00:09:13.821187 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.891333 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.891862 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.891913 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:13 crc kubenswrapper[5122]: I0224 00:09:13.891931 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:13 crc kubenswrapper[5122]: E0224 00:09:13.892349 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.213024 5122 trace.go:236] Trace[799827564]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 00:09:01.234) (total time: 12977ms): Feb 24 00:09:14 crc kubenswrapper[5122]: Trace[799827564]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 12977ms (00:09:14.212) Feb 24 00:09:14 crc kubenswrapper[5122]: Trace[799827564]: [12.977986121s] [12.977986121s] END Feb 24 00:09:14 crc kubenswrapper[5122]: E0224 00:09:14.213523 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: 
runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.213024 5122 trace.go:236] Trace[2023196573]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 00:09:02.218) (total time: 11994ms): Feb 24 00:09:14 crc kubenswrapper[5122]: Trace[2023196573]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 11994ms (00:09:14.212) Feb 24 00:09:14 crc kubenswrapper[5122]: Trace[2023196573]: [11.994622403s] [11.994622403s] END Feb 24 00:09:14 crc kubenswrapper[5122]: E0224 00:09:14.213816 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.213201 5122 trace.go:236] Trace[1517766225]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 00:09:02.958) (total time: 11254ms): Feb 24 00:09:14 crc kubenswrapper[5122]: Trace[1517766225]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 11254ms (00:09:14.213) Feb 24 00:09:14 crc kubenswrapper[5122]: Trace[1517766225]: [11.254806871s] [11.254806871s] END Feb 24 00:09:14 crc kubenswrapper[5122]: E0224 00:09:14.214064 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group 
\"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.213298 5122 trace.go:236] Trace[1623161344]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 00:09:01.167) (total time: 13045ms): Feb 24 00:09:14 crc kubenswrapper[5122]: Trace[1623161344]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 13045ms (00:09:14.213) Feb 24 00:09:14 crc kubenswrapper[5122]: Trace[1623161344]: [13.045720616s] [13.045720616s] END Feb 24 00:09:14 crc kubenswrapper[5122]: E0224 00:09:14.214380 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 24 00:09:14 crc kubenswrapper[5122]: E0224 00:09:14.217943 5122 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.240108 5122 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.376290 5122 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.376413 
5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.624156 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.624487 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.625455 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.625488 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.625498 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:14 crc kubenswrapper[5122]: E0224 00:09:14.625843 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.644361 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.682454 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.894456 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 
00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.894529 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.895438 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.895483 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.895494 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.895724 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.895794 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:14 crc kubenswrapper[5122]: I0224 00:09:14.895806 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:14 crc kubenswrapper[5122]: E0224 00:09:14.895876 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:14 crc kubenswrapper[5122]: E0224 00:09:14.896356 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.681758 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.897819 5122 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.898499 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.900425 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="282599ca9018ca3d0f0d4d2a8d7c09268ce16bf9abe42c4a6797d2d56c2d4e16" exitCode=255 Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.900476 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"282599ca9018ca3d0f0d4d2a8d7c09268ce16bf9abe42c4a6797d2d56c2d4e16"} Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.900519 5122 scope.go:117] "RemoveContainer" containerID="7df3e8bd34f3974c181ba3741474a63639e756f288773ff2d2c6c86c1b0697ed" Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.900725 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.901272 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.901359 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.901434 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:15 crc kubenswrapper[5122]: E0224 00:09:15.901799 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"crc\" not found" node="crc" Feb 24 00:09:15 crc kubenswrapper[5122]: I0224 00:09:15.902151 5122 scope.go:117] "RemoveContainer" containerID="282599ca9018ca3d0f0d4d2a8d7c09268ce16bf9abe42c4a6797d2d56c2d4e16" Feb 24 00:09:15 crc kubenswrapper[5122]: E0224 00:09:15.902716 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 24 00:09:16 crc kubenswrapper[5122]: E0224 00:09:16.317558 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 00:09:16 crc kubenswrapper[5122]: I0224 00:09:16.678726 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:16 crc kubenswrapper[5122]: I0224 00:09:16.908025 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 24 00:09:17 crc kubenswrapper[5122]: I0224 00:09:17.680455 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:18 crc kubenswrapper[5122]: I0224 00:09:18.678837 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.841370 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624d734bee0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.692727008 +0000 UTC m=+0.782181531,LastTimestamp:2026-02-24 00:08:53.692727008 +0000 UTC m=+0.782181531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.848270 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daff70be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC m=+0.845796975,LastTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC m=+0.845796975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.852734 5122 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daffd217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,LastTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.857029 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624dafffe81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756378753 +0000 UTC m=+0.845833266,LastTimestamp:2026-02-24 00:08:53.756378753 +0000 UTC m=+0.845833266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.861346 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624deadf0fa 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.818110202 +0000 UTC m=+0.907564715,LastTimestamp:2026-02-24 00:08:53.818110202 +0000 UTC m=+0.907564715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.868295 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daff70be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daff70be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC m=+0.845796975,LastTimestamp:2026-02-24 00:08:53.875398141 +0000 UTC m=+0.964852654,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.875869 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daffd217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daffd217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,LastTimestamp:2026-02-24 00:08:53.875434902 +0000 UTC m=+0.964889425,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.880638 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624dafffe81\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624dafffe81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756378753 +0000 UTC m=+0.845833266,LastTimestamp:2026-02-24 00:08:53.875448963 +0000 UTC m=+0.964903476,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.887463 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daff70be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daff70be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC m=+0.845796975,LastTimestamp:2026-02-24 00:08:53.87860342 +0000 UTC m=+0.968057933,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.892037 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daffd217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daffd217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,LastTimestamp:2026-02-24 00:08:53.878627831 +0000 UTC m=+0.968082344,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.900498 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624dafffe81\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624dafffe81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756378753 +0000 UTC 
m=+0.845833266,LastTimestamp:2026-02-24 00:08:53.878640351 +0000 UTC m=+0.968094864,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.909417 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daff70be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daff70be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC m=+0.845796975,LastTimestamp:2026-02-24 00:08:53.878765596 +0000 UTC m=+0.968220119,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.916811 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daffd217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daffd217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,LastTimestamp:2026-02-24 00:08:53.878789277 +0000 UTC m=+0.968243790,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.923589 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624dafffe81\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624dafffe81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756378753 +0000 UTC m=+0.845833266,LastTimestamp:2026-02-24 00:08:53.878801968 +0000 UTC m=+0.968256481,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.930819 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daff70be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daff70be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC m=+0.845796975,LastTimestamp:2026-02-24 00:08:53.880367061 +0000 UTC m=+0.969821584,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.934942 5122 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daffd217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daffd217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,LastTimestamp:2026-02-24 00:08:53.880400912 +0000 UTC m=+0.969855435,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.940153 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624dafffe81\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624dafffe81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756378753 +0000 UTC m=+0.845833266,LastTimestamp:2026-02-24 00:08:53.880419453 +0000 UTC m=+0.969873976,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.946353 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daff70be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daff70be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC m=+0.845796975,LastTimestamp:2026-02-24 00:08:53.880470845 +0000 UTC m=+0.969925358,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.953666 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daffd217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daffd217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,LastTimestamp:2026-02-24 00:08:53.880486186 +0000 UTC m=+0.969940699,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.958718 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624dafffe81\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624dafffe81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756378753 +0000 UTC m=+0.845833266,LastTimestamp:2026-02-24 00:08:53.880497606 +0000 UTC m=+0.969952119,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.964341 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daff70be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daff70be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC m=+0.845796975,LastTimestamp:2026-02-24 00:08:53.882057769 +0000 UTC m=+0.971512282,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.971142 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daffd217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daffd217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,LastTimestamp:2026-02-24 00:08:53.882099751 +0000 UTC m=+0.971554264,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.979910 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624dafffe81\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624dafffe81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756378753 +0000 UTC m=+0.845833266,LastTimestamp:2026-02-24 00:08:53.882109751 +0000 UTC m=+0.971564254,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.984622 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daff70be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daff70be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756342462 +0000 UTC 
m=+0.845796975,LastTimestamp:2026-02-24 00:08:53.882997197 +0000 UTC m=+0.972451710,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.990582 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18970624daffd217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18970624daffd217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:53.756367383 +0000 UTC m=+0.845821896,LastTimestamp:2026-02-24 00:08:53.883020218 +0000 UTC m=+0.972474731,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:18 crc kubenswrapper[5122]: E0224 00:09:18.995736 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970624fb37f841 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.296918081 
+0000 UTC m=+1.386372654,LastTimestamp:2026-02-24 00:08:54.296918081 +0000 UTC m=+1.386372654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.000803 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18970624fb62c18c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.299722124 +0000 UTC m=+1.389176697,LastTimestamp:2026-02-24 00:08:54.299722124 +0000 UTC m=+1.389176697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.008049 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18970624fcab7bc2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.321265602 +0000 UTC m=+1.410720135,LastTimestamp:2026-02-24 00:08:54.321265602 +0000 UTC m=+1.410720135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.015380 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18970624fcfcf2a9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.326604457 +0000 UTC m=+1.416059000,LastTimestamp:2026-02-24 00:08:54.326604457 +0000 UTC m=+1.416059000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.021140 
5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18970624fd3d1659 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.330807897 +0000 UTC m=+1.420262420,LastTimestamp:2026-02-24 00:08:54.330807897 +0000 UTC m=+1.420262420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.025962 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897062523b2b06d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.976049261 +0000 UTC m=+2.065503774,LastTimestamp:2026-02-24 00:08:54.976049261 
+0000 UTC m=+2.065503774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.030520 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897062523fd7c36 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.980951094 +0000 UTC m=+2.070405607,LastTimestamp:2026-02-24 00:08:54.980951094 +0000 UTC m=+2.070405607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.034706 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062523ffb6e6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.98109719 +0000 UTC m=+2.070551703,LastTimestamp:2026-02-24 00:08:54.98109719 +0000 UTC m=+2.070551703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.041577 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706252405a7e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.981486565 +0000 UTC m=+2.070941078,LastTimestamp:2026-02-24 00:08:54.981486565 +0000 UTC m=+2.070941078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.049006 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625240b224b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.981845579 +0000 UTC m=+2.071300092,LastTimestamp:2026-02-24 00:08:54.981845579 +0000 UTC m=+2.071300092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.056585 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18970625242be986 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.983993734 +0000 UTC m=+2.073448247,LastTimestamp:2026-02-24 00:08:54.983993734 +0000 UTC m=+2.073448247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.060898 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062524a2cc65 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.991785061 +0000 UTC m=+2.081239564,LastTimestamp:2026-02-24 00:08:54.991785061 +0000 UTC m=+2.081239564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.063514 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062524b5bc1f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.993026079 +0000 UTC m=+2.082480592,LastTimestamp:2026-02-24 00:08:54.993026079 +0000 UTC m=+2.082480592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.068392 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062524c48fe6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.993997798 +0000 UTC m=+2.083452311,LastTimestamp:2026-02-24 00:08:54.993997798 +0000 UTC m=+2.083452311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.072982 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897062524c47d58 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.993993048 +0000 UTC m=+2.083447551,LastTimestamp:2026-02-24 00:08:54.993993048 +0000 UTC m=+2.083447551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.077325 5122 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897062524c4f211 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:54.994022929 +0000 UTC m=+2.083477442,LastTimestamp:2026-02-24 00:08:54.994022929 +0000 UTC m=+2.083477442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.084965 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062537a8c961 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.310944609 +0000 UTC m=+2.400399162,LastTimestamp:2026-02-24 00:08:55.310944609 +0000 UTC m=+2.400399162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.092092 5122 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062538b6cf4e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.328640846 +0000 UTC m=+2.418095369,LastTimestamp:2026-02-24 00:08:55.328640846 +0000 UTC m=+2.418095369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.099787 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062538d178f6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.330388214 
+0000 UTC m=+2.419842767,LastTimestamp:2026-02-24 00:08:55.330388214 +0000 UTC m=+2.419842767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.104559 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625547d0887 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.794616455 +0000 UTC m=+2.884070968,LastTimestamp:2026-02-24 00:08:55.794616455 +0000 UTC m=+2.884070968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.110507 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18970625549d536a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.796732778 +0000 UTC m=+2.886187301,LastTimestamp:2026-02-24 00:08:55.796732778 +0000 UTC m=+2.886187301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.117214 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897062554b56665 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.798310501 +0000 UTC m=+2.887765014,LastTimestamp:2026-02-24 00:08:55.798310501 +0000 UTC m=+2.887765014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.125021 5122 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062554f65da2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.802568098 +0000 UTC m=+2.892022631,LastTimestamp:2026-02-24 00:08:55.802568098 +0000 UTC m=+2.892022631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.131931 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189706255ae25aa7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.901919911 +0000 UTC m=+2.991374424,LastTimestamp:2026-02-24 
00:08:55.901919911 +0000 UTC m=+2.991374424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.138357 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189706255cf7bd01 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.936875777 +0000 UTC m=+3.026330290,LastTimestamp:2026-02-24 00:08:55.936875777 +0000 UTC m=+3.026330290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.146133 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189706255d07bb44 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:55.937923908 +0000 UTC m=+3.027378421,LastTimestamp:2026-02-24 00:08:55.937923908 +0000 UTC m=+3.027378421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.151463 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189706256290bc1b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.030788635 +0000 UTC m=+3.120243148,LastTimestamp:2026-02-24 00:08:56.030788635 +0000 UTC m=+3.120243148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.157213 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897062562bfafb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.033865656 +0000 UTC m=+3.123320169,LastTimestamp:2026-02-24 00:08:56.033865656 +0000 UTC m=+3.123320169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.162575 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062562dd03f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.035787762 +0000 UTC m=+3.125242275,LastTimestamp:2026-02-24 00:08:56.035787762 +0000 UTC m=+3.125242275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.166793 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897062562e4d262 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.036299362 +0000 UTC m=+3.125753875,LastTimestamp:2026-02-24 00:08:56.036299362 +0000 UTC m=+3.125753875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.174132 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897062563bd8f9e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.050503582 +0000 UTC m=+3.139958095,LastTimestamp:2026-02-24 00:08:56.050503582 +0000 UTC m=+3.139958095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 
00:09:19.179719 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897062563d902cf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.052302543 +0000 UTC m=+3.141757056,LastTimestamp:2026-02-24 00:08:56.052302543 +0000 UTC m=+3.141757056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.186059 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189706256466975c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.061581148 +0000 UTC m=+3.151035661,LastTimestamp:2026-02-24 00:08:56.061581148 +0000 UTC 
m=+3.151035661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.192130 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189706256466f8dd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.061606109 +0000 UTC m=+3.151060622,LastTimestamp:2026-02-24 00:08:56.061606109 +0000 UTC m=+3.151060622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.199490 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062564691366 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.061743974 
+0000 UTC m=+3.151198487,LastTimestamp:2026-02-24 00:08:56.061743974 +0000 UTC m=+3.151198487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.204295 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706256492ea5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.064485982 +0000 UTC m=+3.153940525,LastTimestamp:2026-02-24 00:08:56.064485982 +0000 UTC m=+3.153940525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.210295 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062569791207 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.146678279 +0000 UTC m=+3.236132792,LastTimestamp:2026-02-24 00:08:56.146678279 +0000 UTC m=+3.236132792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.215510 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189706256a488c3a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.160275514 +0000 UTC m=+3.249730027,LastTimestamp:2026-02-24 00:08:56.160275514 +0000 UTC m=+3.249730027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.220914 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189706256f062ad6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.239811286 +0000 UTC m=+3.329265799,LastTimestamp:2026-02-24 00:08:56.239811286 +0000 UTC m=+3.329265799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.225923 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706256f633ddf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.245911007 +0000 UTC m=+3.335365530,LastTimestamp:2026-02-24 00:08:56.245911007 +0000 UTC m=+3.335365530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.230326 5122 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189706256fe69210 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.254517776 +0000 UTC m=+3.343972289,LastTimestamp:2026-02-24 00:08:56.254517776 +0000 UTC m=+3.343972289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.236465 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189706256ff60569 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.255530345 +0000 UTC m=+3.344984858,LastTimestamp:2026-02-24 00:08:56.255530345 +0000 UTC 
m=+3.344984858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.242909 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062570719dd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.263630294 +0000 UTC m=+3.353084807,LastTimestamp:2026-02-24 00:08:56.263630294 +0000 UTC m=+3.353084807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.248217 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706257086d51b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.265020699 +0000 UTC m=+3.354475212,LastTimestamp:2026-02-24 00:08:56.265020699 +0000 UTC m=+3.354475212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.253509 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189706257e00a2ec openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.491107052 +0000 UTC m=+3.580561575,LastTimestamp:2026-02-24 00:08:56.491107052 +0000 UTC m=+3.580561575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.258958 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706257e1fcdd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.493149651 +0000 UTC m=+3.582604164,LastTimestamp:2026-02-24 00:08:56.493149651 +0000 UTC m=+3.582604164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.264000 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706257f1700e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.509350121 +0000 UTC m=+3.598804634,LastTimestamp:2026-02-24 00:08:56.509350121 +0000 UTC m=+3.598804634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.268672 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706257f27c3bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.510448573 +0000 UTC m=+3.599903096,LastTimestamp:2026-02-24 00:08:56.510448573 +0000 UTC m=+3.599903096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.273614 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189706257f630f01 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.514334465 +0000 UTC m=+3.603788978,LastTimestamp:2026-02-24 00:08:56.514334465 +0000 UTC m=+3.603788978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.278984 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706258a33b2bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.695780029 +0000 UTC m=+3.785234532,LastTimestamp:2026-02-24 00:08:56.695780029 +0000 UTC m=+3.785234532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.284175 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062590024ecc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.793206476 +0000 UTC m=+3.882660989,LastTimestamp:2026-02-24 
00:08:56.793206476 +0000 UTC m=+3.882660989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.289168 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062590136c04 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.794328068 +0000 UTC m=+3.883782581,LastTimestamp:2026-02-24 00:08:56.794328068 +0000 UTC m=+3.883782581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.294780 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897062591b5a559 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.821736793 +0000 UTC m=+3.911191306,LastTimestamp:2026-02-24 00:08:56.821736793 +0000 UTC m=+3.911191306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.299656 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706259e713d1e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.03535747 +0000 UTC m=+4.124811983,LastTimestamp:2026-02-24 00:08:57.03535747 +0000 UTC m=+4.124811983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.305351 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189706259f57af90 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.050460048 +0000 UTC m=+4.139914561,LastTimestamp:2026-02-24 00:08:57.050460048 +0000 UTC m=+4.139914561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.313679 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706259f78c8e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.052629221 +0000 UTC m=+4.142083734,LastTimestamp:2026-02-24 00:08:57.052629221 +0000 UTC m=+4.142083734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.319462 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625a0257eae openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.06394795 +0000 UTC m=+4.153402463,LastTimestamp:2026-02-24 00:08:57.06394795 +0000 UTC m=+4.153402463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.326248 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625cdf0c84f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.832245327 +0000 UTC m=+4.921699870,LastTimestamp:2026-02-24 00:08:57.832245327 +0000 UTC m=+4.921699870,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.332617 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18970625dbb2c3b7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.063061943 +0000 UTC m=+5.152516496,LastTimestamp:2026-02-24 00:08:58.063061943 +0000 UTC m=+5.152516496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.340134 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625dc86ff05 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.076970757 +0000 UTC m=+5.166425290,LastTimestamp:2026-02-24 00:08:58.076970757 +0000 UTC m=+5.166425290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.347861 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625dc9f9c24 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.078583844 +0000 UTC m=+5.168038367,LastTimestamp:2026-02-24 00:08:58.078583844 +0000 UTC m=+5.168038367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.354832 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625eddf087a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.367953018 +0000 UTC m=+5.457407541,LastTimestamp:2026-02-24 00:08:58.367953018 +0000 UTC m=+5.457407541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.360616 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625eeff6faf openshift-etcd 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.386853807 +0000 UTC m=+5.476308360,LastTimestamp:2026-02-24 00:08:58.386853807 +0000 UTC m=+5.476308360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.367870 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970625ef171ae2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.388404962 +0000 UTC m=+5.477859475,LastTimestamp:2026-02-24 00:08:58.388404962 +0000 UTC m=+5.477859475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.373323 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897062600419e37 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.676403767 +0000 UTC m=+5.765858290,LastTimestamp:2026-02-24 00:08:58.676403767 +0000 UTC m=+5.765858290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.379643 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18970626013a68a7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.692708519 +0000 UTC m=+5.782163042,LastTimestamp:2026-02-24 00:08:58.692708519 +0000 UTC m=+5.782163042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.384806 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189706260153b27a openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.694365818 +0000 UTC m=+5.783820331,LastTimestamp:2026-02-24 00:08:58.694365818 +0000 UTC m=+5.783820331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.390708 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189706260e244dd1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.909363665 +0000 UTC m=+5.998818178,LastTimestamp:2026-02-24 00:08:58.909363665 +0000 UTC m=+5.998818178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.397825 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189706260ebee039 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.919493689 +0000 UTC m=+6.008948202,LastTimestamp:2026-02-24 00:08:58.919493689 +0000 UTC m=+6.008948202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.402963 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189706260ed9bedc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:58.92125462 +0000 UTC m=+6.010709143,LastTimestamp:2026-02-24 00:08:58.92125462 +0000 UTC m=+6.010709143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.409484 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189706261da743f2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:59.169604594 +0000 UTC m=+6.259059107,LastTimestamp:2026-02-24 00:08:59.169604594 +0000 UTC m=+6.259059107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.414192 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189706261e77b42b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:59.183264811 +0000 UTC m=+6.272719324,LastTimestamp:2026-02-24 00:08:59.183264811 +0000 UTC m=+6.272719324,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.422630 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 00:09:19 crc 
kubenswrapper[5122]: &Event{ObjectMeta:{kube-controller-manager-crc.1897062753ee601c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded
Feb 24 00:09:19 crc kubenswrapper[5122]: body:
Feb 24 00:09:19 crc kubenswrapper[5122]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:04.37520182 +0000 UTC m=+11.464656403,LastTimestamp:2026-02-24 00:09:04.37520182 +0000 UTC m=+11.464656403,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 24 00:09:19 crc kubenswrapper[5122]: >
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.428123 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062753f0bf29 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:04.375357225 +0000 UTC m=+11.464811778,LastTimestamp:2026-02-24 00:09:04.375357225 +0000 UTC m=+11.464811778,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.432962 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 24 00:09:19 crc kubenswrapper[5122]: &Event{ObjectMeta:{kube-apiserver-crc.18970628543b5c42 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 24 00:09:19 crc kubenswrapper[5122]: body:
Feb 24 00:09:19 crc kubenswrapper[5122]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:08.675214402 +0000 UTC m=+15.764668935,LastTimestamp:2026-02-24 00:09:08.675214402 +0000 UTC m=+15.764668935,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 24 00:09:19 crc kubenswrapper[5122]: >
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.437908 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18970628543d23bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:08.675331005 +0000 UTC m=+15.764785528,LastTimestamp:2026-02-24 00:09:08.675331005 +0000 UTC m=+15.764785528,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.445298 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897062590136c04\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062590136c04 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.794328068 +0000 UTC m=+3.883782581,LastTimestamp:2026-02-24 00:09:08.873483684 +0000 UTC m=+15.962938197,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.451166 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189706259e713d1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706259e713d1e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.03535747 +0000 UTC m=+4.124811983,LastTimestamp:2026-02-24 00:09:09.120688985 +0000 UTC m=+16.210143518,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.457479 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189706259f78c8e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706259f78c8e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.052629221 +0000 UTC m=+4.142083734,LastTimestamp:2026-02-24 00:09:09.129403168 +0000 UTC m=+16.218857691,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.461696 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 24 00:09:19 crc kubenswrapper[5122]: &Event{ObjectMeta:{kube-apiserver-crc.1897062873594bb1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Feb 24 00:09:19 crc kubenswrapper[5122]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 24 00:09:19 crc kubenswrapper[5122]:
Feb 24 00:09:19 crc kubenswrapper[5122]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:09.197269937 +0000 UTC m=+16.286724480,LastTimestamp:2026-02-24 00:09:09.197269937 +0000 UTC m=+16.286724480,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 24 00:09:19 crc kubenswrapper[5122]: >
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.465153 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18970628735a47ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:09.197334509 +0000 UTC m=+16.286789062,LastTimestamp:2026-02-24 00:09:09.197334509 +0000 UTC m=+16.286789062,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.469049 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897062753ee601c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Feb 24 00:09:19 crc kubenswrapper[5122]: &Event{ObjectMeta:{kube-controller-manager-crc.1897062753ee601c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded
Feb 24 00:09:19 crc kubenswrapper[5122]: body:
Feb 24 00:09:19 crc kubenswrapper[5122]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:04.37520182 +0000 UTC m=+11.464656403,LastTimestamp:2026-02-24 00:09:14.376362482 +0000 UTC m=+21.465817005,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 24 00:09:19 crc kubenswrapper[5122]: >
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.472811 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897062753f0bf29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897062753f0bf29 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:04.375357225 +0000 UTC m=+11.464811778,LastTimestamp:2026-02-24 00:09:14.376443275 +0000 UTC m=+21.465897818,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:19 crc kubenswrapper[5122]: E0224 00:09:19.476802 5122 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062a0305c9c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:15.90268359 +0000 UTC m=+22.992138103,LastTimestamp:2026-02-24 00:09:15.90268359 +0000 UTC m=+22.992138103,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:19 crc kubenswrapper[5122]: I0224 00:09:19.677452 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:20 crc kubenswrapper[5122]: I0224 00:09:20.618138 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:20 crc kubenswrapper[5122]: I0224 00:09:20.619370 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:20 crc kubenswrapper[5122]: I0224 00:09:20.619453 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:20 crc kubenswrapper[5122]: I0224 00:09:20.619483 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:20 crc kubenswrapper[5122]: I0224 00:09:20.619535 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 24 00:09:20 crc kubenswrapper[5122]: E0224 00:09:20.630451 5122 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 24 00:09:20 crc kubenswrapper[5122]: I0224 00:09:20.682334 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.382873 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.383167 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.384044 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.384140 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.384166 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:21 crc kubenswrapper[5122]: E0224 00:09:21.384884 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.390139 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.681009 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.919428 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.920280 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.920350 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:21 crc kubenswrapper[5122]: I0224 00:09:21.920376 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:21 crc kubenswrapper[5122]: E0224 00:09:21.920986 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:09:22 crc kubenswrapper[5122]: E0224 00:09:22.438856 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 24 00:09:22 crc kubenswrapper[5122]: I0224 00:09:22.682602 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:22 crc kubenswrapper[5122]: E0224 00:09:22.990779 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 24 00:09:23 crc kubenswrapper[5122]: E0224 00:09:23.329103 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 24 00:09:23 crc kubenswrapper[5122]: I0224 00:09:23.630301 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:09:23 crc kubenswrapper[5122]: I0224 00:09:23.630600 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:23 crc kubenswrapper[5122]: I0224 00:09:23.631594 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:23 crc kubenswrapper[5122]: I0224 00:09:23.631724 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:23 crc kubenswrapper[5122]: I0224 00:09:23.631756 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:23 crc kubenswrapper[5122]: E0224 00:09:23.632573 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:09:23 crc kubenswrapper[5122]: I0224 00:09:23.632982 5122 scope.go:117] "RemoveContainer" containerID="282599ca9018ca3d0f0d4d2a8d7c09268ce16bf9abe42c4a6797d2d56c2d4e16"
Feb 24 00:09:23 crc kubenswrapper[5122]: E0224 00:09:23.633364 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 24 00:09:23 crc kubenswrapper[5122]: E0224 00:09:23.641636 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897062a0305c9c6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062a0305c9c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:15.90268359 +0000 UTC m=+22.992138103,LastTimestamp:2026-02-24 00:09:23.633306164 +0000 UTC m=+30.722760717,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:23 crc kubenswrapper[5122]: I0224 00:09:23.679766 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:23 crc kubenswrapper[5122]: E0224 00:09:23.821784 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 00:09:24 crc kubenswrapper[5122]: E0224 00:09:24.419548 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 24 00:09:24 crc kubenswrapper[5122]: I0224 00:09:24.679876 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:25 crc kubenswrapper[5122]: I0224 00:09:25.680835 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:25 crc kubenswrapper[5122]: E0224 00:09:25.681279 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 24 00:09:26 crc kubenswrapper[5122]: I0224 00:09:26.679623 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:27 crc kubenswrapper[5122]: I0224 00:09:27.631329 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:27 crc kubenswrapper[5122]: I0224 00:09:27.632692 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:27 crc kubenswrapper[5122]: I0224 00:09:27.632737 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:27 crc kubenswrapper[5122]: I0224 00:09:27.632747 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:27 crc kubenswrapper[5122]: I0224 00:09:27.632778 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 24 00:09:27 crc kubenswrapper[5122]: E0224 00:09:27.645799 5122 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 24 00:09:27 crc kubenswrapper[5122]: I0224 00:09:27.680686 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:28 crc kubenswrapper[5122]: I0224 00:09:28.679516 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:29 crc kubenswrapper[5122]: I0224 00:09:29.680760 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:30 crc kubenswrapper[5122]: E0224 00:09:30.337309 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 24 00:09:30 crc kubenswrapper[5122]: I0224 00:09:30.680602 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:31 crc kubenswrapper[5122]: I0224 00:09:31.682755 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:32 crc kubenswrapper[5122]: I0224 00:09:32.682999 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:33 crc kubenswrapper[5122]: I0224 00:09:33.679186 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:33 crc kubenswrapper[5122]: E0224 00:09:33.822684 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 00:09:34 crc kubenswrapper[5122]: I0224 00:09:34.646546 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:34 crc kubenswrapper[5122]: I0224 00:09:34.648025 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:34 crc kubenswrapper[5122]: I0224 00:09:34.648110 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:34 crc kubenswrapper[5122]: I0224 00:09:34.648132 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:34 crc kubenswrapper[5122]: I0224 00:09:34.648172 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 24 00:09:34 crc kubenswrapper[5122]: E0224 00:09:34.666217 5122 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 24 00:09:34 crc kubenswrapper[5122]: I0224 00:09:34.681067 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:35 crc kubenswrapper[5122]: I0224 00:09:35.679480 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:36 crc kubenswrapper[5122]: I0224 00:09:36.681908 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:37 crc kubenswrapper[5122]: E0224 00:09:37.343638 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 24 00:09:37 crc kubenswrapper[5122]: I0224 00:09:37.677822 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:37 crc kubenswrapper[5122]: I0224 00:09:37.774262 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:37 crc kubenswrapper[5122]: I0224 00:09:37.775465 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:37 crc kubenswrapper[5122]: I0224 00:09:37.775510 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:37 crc kubenswrapper[5122]: I0224 00:09:37.775524 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:37 crc kubenswrapper[5122]: E0224 00:09:37.775955 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:09:37 crc kubenswrapper[5122]: I0224 00:09:37.776339 5122 scope.go:117] "RemoveContainer" containerID="282599ca9018ca3d0f0d4d2a8d7c09268ce16bf9abe42c4a6797d2d56c2d4e16"
Feb 24 00:09:37 crc kubenswrapper[5122]: E0224 00:09:37.783177 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897062590136c04\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062590136c04 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:56.794328068 +0000 UTC m=+3.883782581,LastTimestamp:2026-02-24 00:09:37.777488331 +0000 UTC m=+44.866942854,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:37 crc kubenswrapper[5122]: E0224 00:09:37.959599 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189706259e713d1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706259e713d1e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.03535747 +0000 UTC m=+4.124811983,LastTimestamp:2026-02-24 00:09:37.95498698 +0000 UTC m=+45.044441493,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:37 crc kubenswrapper[5122]: I0224 00:09:37.970190 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 24 00:09:37 crc kubenswrapper[5122]: E0224 00:09:37.976090 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189706259f78c8e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189706259f78c8e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:08:57.052629221 +0000 UTC m=+4.142083734,LastTimestamp:2026-02-24 00:09:37.96979809 +0000 UTC m=+45.059252633,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 00:09:38 crc kubenswrapper[5122]: I0224 00:09:38.678524 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:38 crc kubenswrapper[5122]: I0224 00:09:38.977886 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 24 00:09:38 crc kubenswrapper[5122]: I0224 00:09:38.980998 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"74c31a8b89d875649b040b49008b4aad4535fa0d067642492465a19bbb85b755"}
Feb 24 00:09:38 crc kubenswrapper[5122]: I0224 00:09:38.981522 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:38 crc kubenswrapper[5122]: I0224 00:09:38.982765 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:38 crc kubenswrapper[5122]: I0224 00:09:38.982814 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:38 crc kubenswrapper[5122]: I0224 00:09:38.982829 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:38 crc kubenswrapper[5122]: E0224 00:09:38.983358 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:09:39 crc kubenswrapper[5122]: E0224 00:09:39.087448 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.680521 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.984945 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.985369 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.987354 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="74c31a8b89d875649b040b49008b4aad4535fa0d067642492465a19bbb85b755" exitCode=255
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.987397 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"74c31a8b89d875649b040b49008b4aad4535fa0d067642492465a19bbb85b755"}
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.987433 5122 scope.go:117] "RemoveContainer" containerID="282599ca9018ca3d0f0d4d2a8d7c09268ce16bf9abe42c4a6797d2d56c2d4e16"
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.987607 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.988579 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.988611 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.988624 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:09:39 crc kubenswrapper[5122]: E0224 00:09:39.988933 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 24 00:09:39 crc kubenswrapper[5122]: I0224 00:09:39.989203 5122 scope.go:117] "RemoveContainer" containerID="74c31a8b89d875649b040b49008b4aad4535fa0d067642492465a19bbb85b755"
Feb 24 00:09:39 crc kubenswrapper[5122]: E0224 00:09:39.989413 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 24 00:09:40 crc kubenswrapper[5122]: E0224 00:09:40.002368 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897062a0305c9c6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062a0305c9c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:15.90268359 +0000 UTC m=+22.992138103,LastTimestamp:2026-02-24 00:09:39.989378728 +0000 UTC m=+47.078833241,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:40 crc kubenswrapper[5122]: I0224 00:09:40.678608 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:40 crc kubenswrapper[5122]: I0224 00:09:40.993394 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 24 00:09:41 crc kubenswrapper[5122]: I0224 00:09:41.666697 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:41 crc kubenswrapper[5122]: I0224 00:09:41.667995 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:41 crc kubenswrapper[5122]: I0224 00:09:41.668108 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:41 crc kubenswrapper[5122]: I0224 00:09:41.668135 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:41 crc 
kubenswrapper[5122]: I0224 00:09:41.668180 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 24 00:09:41 crc kubenswrapper[5122]: I0224 00:09:41.680277 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:41 crc kubenswrapper[5122]: E0224 00:09:41.688513 5122 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 00:09:42 crc kubenswrapper[5122]: E0224 00:09:42.421508 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 24 00:09:42 crc kubenswrapper[5122]: I0224 00:09:42.679496 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:43 crc kubenswrapper[5122]: I0224 00:09:43.630142 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:43 crc kubenswrapper[5122]: I0224 00:09:43.630411 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:43 crc kubenswrapper[5122]: I0224 00:09:43.632048 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:43 crc kubenswrapper[5122]: I0224 00:09:43.632129 5122 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:43 crc kubenswrapper[5122]: I0224 00:09:43.632148 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:43 crc kubenswrapper[5122]: E0224 00:09:43.632527 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:43 crc kubenswrapper[5122]: I0224 00:09:43.632824 5122 scope.go:117] "RemoveContainer" containerID="74c31a8b89d875649b040b49008b4aad4535fa0d067642492465a19bbb85b755" Feb 24 00:09:43 crc kubenswrapper[5122]: E0224 00:09:43.633063 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 24 00:09:43 crc kubenswrapper[5122]: E0224 00:09:43.641972 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897062a0305c9c6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062a0305c9c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod 
kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:15.90268359 +0000 UTC m=+22.992138103,LastTimestamp:2026-02-24 00:09:43.633019191 +0000 UTC m=+50.722473714,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:43 crc kubenswrapper[5122]: I0224 00:09:43.683521 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:43 crc kubenswrapper[5122]: E0224 00:09:43.823272 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:09:44 crc kubenswrapper[5122]: E0224 00:09:44.350047 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 00:09:44 crc kubenswrapper[5122]: I0224 00:09:44.680797 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:45 crc kubenswrapper[5122]: I0224 00:09:45.680698 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:46 crc kubenswrapper[5122]: I0224 00:09:46.680347 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:47 crc kubenswrapper[5122]: E0224 00:09:47.179063 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 24 00:09:47 crc kubenswrapper[5122]: E0224 00:09:47.442290 5122 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 24 00:09:47 crc kubenswrapper[5122]: I0224 00:09:47.677112 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.680127 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.689019 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.694922 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.695002 5122 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.695037 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.695109 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 24 00:09:48 crc kubenswrapper[5122]: E0224 00:09:48.711936 5122 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.982163 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.982587 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.983851 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.983953 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.983975 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:48 crc kubenswrapper[5122]: E0224 00:09:48.984606 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:48 crc kubenswrapper[5122]: I0224 00:09:48.985011 5122 scope.go:117] "RemoveContainer" containerID="74c31a8b89d875649b040b49008b4aad4535fa0d067642492465a19bbb85b755" Feb 24 00:09:48 crc kubenswrapper[5122]: E0224 00:09:48.985388 
5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 24 00:09:48 crc kubenswrapper[5122]: E0224 00:09:48.993334 5122 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897062a0305c9c6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897062a0305c9c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:15.90268359 +0000 UTC m=+22.992138103,LastTimestamp:2026-02-24 00:09:48.98533156 +0000 UTC m=+56.074786113,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:49 crc kubenswrapper[5122]: I0224 00:09:49.680513 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:50 crc kubenswrapper[5122]: I0224 00:09:50.105170 5122 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:09:50 crc kubenswrapper[5122]: I0224 00:09:50.105502 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:50 crc kubenswrapper[5122]: I0224 00:09:50.106707 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:50 crc kubenswrapper[5122]: I0224 00:09:50.106757 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:50 crc kubenswrapper[5122]: I0224 00:09:50.106772 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:09:50 crc kubenswrapper[5122]: E0224 00:09:50.107157 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:09:50 crc kubenswrapper[5122]: I0224 00:09:50.682365 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:51 crc kubenswrapper[5122]: E0224 00:09:51.358578 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 00:09:51 crc kubenswrapper[5122]: I0224 00:09:51.681753 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:52 crc kubenswrapper[5122]: I0224 00:09:52.680145 5122 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:53 crc kubenswrapper[5122]: I0224 00:09:53.678423 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:53 crc kubenswrapper[5122]: E0224 00:09:53.823873 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:09:54 crc kubenswrapper[5122]: I0224 00:09:54.678864 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:55 crc kubenswrapper[5122]: I0224 00:09:55.681953 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:55 crc kubenswrapper[5122]: I0224 00:09:55.712612 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:09:55 crc kubenswrapper[5122]: I0224 00:09:55.713917 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:09:55 crc kubenswrapper[5122]: I0224 00:09:55.713991 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:09:55 crc kubenswrapper[5122]: I0224 00:09:55.714012 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 00:09:55 crc kubenswrapper[5122]: I0224 00:09:55.714051 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 24 00:09:55 crc kubenswrapper[5122]: E0224 00:09:55.725214 5122 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 00:09:56 crc kubenswrapper[5122]: I0224 00:09:56.680475 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:57 crc kubenswrapper[5122]: I0224 00:09:57.678741 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:58 crc kubenswrapper[5122]: E0224 00:09:58.365346 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 00:09:58 crc kubenswrapper[5122]: I0224 00:09:58.679253 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:09:59 crc kubenswrapper[5122]: I0224 00:09:59.677932 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at 
the cluster scope Feb 24 00:10:00 crc kubenswrapper[5122]: I0224 00:10:00.682386 5122 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 00:10:00 crc kubenswrapper[5122]: I0224 00:10:00.924967 5122 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-rvwqs" Feb 24 00:10:00 crc kubenswrapper[5122]: I0224 00:10:00.933156 5122 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-rvwqs" Feb 24 00:10:00 crc kubenswrapper[5122]: I0224 00:10:00.967013 5122 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 24 00:10:01 crc kubenswrapper[5122]: I0224 00:10:01.448234 5122 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 24 00:10:01 crc kubenswrapper[5122]: I0224 00:10:01.934157 5122 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-26 00:05:00 +0000 UTC" deadline="2026-03-22 15:07:15.900764335 +0000 UTC" Feb 24 00:10:01 crc kubenswrapper[5122]: I0224 00:10:01.934220 5122 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="638h57m13.96654844s" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.726304 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.727552 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.727596 5122 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.727607 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.727748 5122 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.736512 5122 kubelet_node_status.go:127] "Node was previously registered" node="crc" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.736721 5122 kubelet_node_status.go:81] "Successfully registered node" node="crc" Feb 24 00:10:02 crc kubenswrapper[5122]: E0224 00:10:02.736742 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.743783 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.743813 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.743823 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.743837 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.743847 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:02Z","lastTransitionTime":"2026-02-24T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:02 crc kubenswrapper[5122]: E0224 00:10:02.755351 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.762794 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.762870 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.762884 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.762902 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.762915 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:02Z","lastTransitionTime":"2026-02-24T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:02 crc kubenswrapper[5122]: E0224 00:10:02.772149 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.778461 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.778501 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.778516 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.778532 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.778543 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:02Z","lastTransitionTime":"2026-02-24T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:02 crc kubenswrapper[5122]: E0224 00:10:02.786719 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.792845 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.792901 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.792919 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.792942 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:02 crc kubenswrapper[5122]: I0224 00:10:02.792958 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:02Z","lastTransitionTime":"2026-02-24T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:02 crc kubenswrapper[5122]: E0224 00:10:02.801130 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:02 crc kubenswrapper[5122]: E0224 00:10:02.801425 5122 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:10:02 crc kubenswrapper[5122]: E0224 00:10:02.801506 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:02 crc kubenswrapper[5122]: E0224 00:10:02.901977 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.002444 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.103188 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.204252 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.304366 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.405464 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 
00:10:03.505831 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.606974 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.707677 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.774423 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.774547 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.775409 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.775459 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.775458 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.775477 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.775496 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.775506 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.775886 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"crc\" not found" node="crc" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.776348 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:10:03 crc kubenswrapper[5122]: I0224 00:10:03.776738 5122 scope.go:117] "RemoveContainer" containerID="74c31a8b89d875649b040b49008b4aad4535fa0d067642492465a19bbb85b755" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.808292 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.824661 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:10:03 crc kubenswrapper[5122]: E0224 00:10:03.909398 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.010205 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.110435 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.210538 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.311041 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.411394 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.512322 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc 
kubenswrapper[5122]: E0224 00:10:04.613354 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.714417 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.815341 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:04 crc kubenswrapper[5122]: E0224 00:10:04.915744 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.016136 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: I0224 00:10:05.068309 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 24 00:10:05 crc kubenswrapper[5122]: I0224 00:10:05.070174 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e"} Feb 24 00:10:05 crc kubenswrapper[5122]: I0224 00:10:05.070437 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:10:05 crc kubenswrapper[5122]: I0224 00:10:05.070978 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:05 crc kubenswrapper[5122]: I0224 00:10:05.071025 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:05 crc kubenswrapper[5122]: I0224 00:10:05.071040 5122 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.071516 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.116910 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.217252 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.317964 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.418376 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.519460 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.619861 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.720333 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.820991 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:05 crc kubenswrapper[5122]: E0224 00:10:05.921959 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.022359 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 
crc kubenswrapper[5122]: I0224 00:10:06.074028 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.074636 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.076134 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e" exitCode=255 Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.076206 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e"} Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.076252 5122 scope.go:117] "RemoveContainer" containerID="74c31a8b89d875649b040b49008b4aad4535fa0d067642492465a19bbb85b755" Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.076574 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.077362 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.077405 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.077425 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.077962 5122 kubelet.go:3336] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:10:06 crc kubenswrapper[5122]: I0224 00:10:06.078365 5122 scope.go:117] "RemoveContainer" containerID="e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.078645 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.122669 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.223540 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.323682 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.424509 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.525030 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.625182 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.725713 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.826349 
5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:06 crc kubenswrapper[5122]: E0224 00:10:06.926833 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.027357 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: I0224 00:10:07.081144 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.127530 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.227677 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.328571 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.429391 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.530303 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.631247 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.731842 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.832373 5122 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"crc\" not found" Feb 24 00:10:07 crc kubenswrapper[5122]: E0224 00:10:07.932915 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.033163 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.133962 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.234461 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.335302 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.436414 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.537466 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.638444 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.739624 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.840466 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:08 crc kubenswrapper[5122]: E0224 00:10:08.940979 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.042441 5122 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.143408 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.244914 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.345675 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.446683 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.547057 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: I0224 00:10:09.580137 5122 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.648304 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.749313 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.849778 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:09 crc kubenswrapper[5122]: E0224 00:10:09.950699 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.051543 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.152592 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.253679 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.354869 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.456194 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.557007 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.657386 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.758632 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.859044 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:10 crc kubenswrapper[5122]: E0224 00:10:10.959983 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.060430 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.161016 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.261933 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.363062 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.464214 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.564804 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.665938 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.766777 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.867706 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:11 crc kubenswrapper[5122]: E0224 00:10:11.968717 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.069866 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.170162 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.270527 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.371508 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.471658 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.572834 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.674059 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.774200 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.865954 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.870530 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.870779 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.870979 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.871256 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.871439 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:12Z","lastTransitionTime":"2026-02-24T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.887193 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.891294 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.891496 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.891648 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.891795 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.891920 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:12Z","lastTransitionTime":"2026-02-24T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.902673 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.907042 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.907103 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.907116 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.907134 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.907149 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:12Z","lastTransitionTime":"2026-02-24T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.920437 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.924022 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.924152 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.924181 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.924213 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:12 crc kubenswrapper[5122]: I0224 00:10:12.924238 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:12Z","lastTransitionTime":"2026-02-24T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.938897 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.939177 5122 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:10:12 crc kubenswrapper[5122]: E0224 00:10:12.939224 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.039381 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.139867 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.240363 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.340670 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.441660 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.542685 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: I0224 00:10:13.630424 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:10:13 crc kubenswrapper[5122]: I0224 00:10:13.630812 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:10:13 crc kubenswrapper[5122]: I0224 
00:10:13.632223 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:13 crc kubenswrapper[5122]: I0224 00:10:13.632442 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:13 crc kubenswrapper[5122]: I0224 00:10:13.632591 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.633358 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:10:13 crc kubenswrapper[5122]: I0224 00:10:13.633949 5122 scope.go:117] "RemoveContainer" containerID="e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.634476 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.643790 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.744906 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.824896 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.845444 5122 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 24 00:10:13 crc kubenswrapper[5122]: E0224 00:10:13.946264 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.047540 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.148478 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.249771 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.350921 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.451534 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.552660 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.653125 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.753863 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.854280 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:14 crc kubenswrapper[5122]: E0224 00:10:14.955266 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.055735 5122 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: I0224 00:10:15.071429 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:10:15 crc kubenswrapper[5122]: I0224 00:10:15.071801 5122 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:10:15 crc kubenswrapper[5122]: I0224 00:10:15.073267 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:15 crc kubenswrapper[5122]: I0224 00:10:15.073353 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:15 crc kubenswrapper[5122]: I0224 00:10:15.073373 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.074156 5122 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 24 00:10:15 crc kubenswrapper[5122]: I0224 00:10:15.074632 5122 scope.go:117] "RemoveContainer" containerID="e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.074976 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.156491 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc 
kubenswrapper[5122]: E0224 00:10:15.257307 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.357787 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.458390 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.558828 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.659298 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.759948 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.860509 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:15 crc kubenswrapper[5122]: E0224 00:10:15.980159 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.080789 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.181745 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.282798 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.384144 5122 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.485023 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.586288 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.686992 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.787824 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.887979 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:16 crc kubenswrapper[5122]: E0224 00:10:16.988800 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:17 crc kubenswrapper[5122]: E0224 00:10:17.089175 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:17 crc kubenswrapper[5122]: E0224 00:10:17.189405 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:17 crc kubenswrapper[5122]: E0224 00:10:17.290326 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:17 crc kubenswrapper[5122]: E0224 00:10:17.391130 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:17 crc kubenswrapper[5122]: E0224 00:10:17.491453 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:17 crc kubenswrapper[5122]: E0224 00:10:17.592446 5122 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:17 crc kubenswrapper[5122]: E0224 00:10:17.693050 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:17.793720 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:17.894090 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:17.994221 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:18.095058 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:18.195594 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:18.296486 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:18.396746 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:18.497139 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:18.598291 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:18.699036 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 
00:10:18.799821 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:18 crc kubenswrapper[5122]: E0224 00:10:18.900231 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.001065 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.102177 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.203121 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.303945 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.404476 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.505107 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.606374 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.707164 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.807407 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:19 crc kubenswrapper[5122]: E0224 00:10:19.907626 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 
00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.008498 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.109572 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.210186 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.310935 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.412138 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.512530 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.613178 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.713962 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.814480 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:20 crc kubenswrapper[5122]: E0224 00:10:20.914562 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.015727 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.116814 5122 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.217419 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.318394 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.419493 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.519657 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.620232 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.721241 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.822418 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:21 crc kubenswrapper[5122]: E0224 00:10:21.923356 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.023810 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.124452 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.225387 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.326403 5122 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.427448 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.528575 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.629494 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.729897 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.830888 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:22 crc kubenswrapper[5122]: E0224 00:10:22.931097 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.032133 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.077655 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.081709 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.081749 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.081761 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.081776 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.081787 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:23Z","lastTransitionTime":"2026-02-24T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.091002 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.094879 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.094921 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.094952 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.094975 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.094991 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:23Z","lastTransitionTime":"2026-02-24T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.107382 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.110903 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.110937 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.110954 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.110971 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.110982 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:23Z","lastTransitionTime":"2026-02-24T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.122803 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.126584 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.126641 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.126656 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.126671 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:23 crc kubenswrapper[5122]: I0224 00:10:23.126687 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:23Z","lastTransitionTime":"2026-02-24T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.143555 5122 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c6c5e4a-ab9c-4e6a-ad00-267208aca03c\\\",\\\"systemUUID\\\":\\\"e2261f0c-b7f7-46fe-a312-4eb5967f7e40\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.143710 5122 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.143739 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.244184 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.345104 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.445877 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.546381 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.646667 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.747305 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.826219 5122 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: E0224 00:10:23.847736 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:23 crc kubenswrapper[5122]: 
E0224 00:10:23.948006 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.048536 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.148849 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: I0224 00:10:24.161722 5122 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.249065 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.350032 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.450516 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.551115 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.652086 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.753507 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.854279 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:24 crc kubenswrapper[5122]: E0224 00:10:24.954868 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.055142 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.155768 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.256343 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.356448 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.459046 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.559470 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.659555 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.760553 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.860875 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:25 crc kubenswrapper[5122]: E0224 00:10:25.962124 5122 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.021860 5122 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.064174 5122 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.064226 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.064238 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.064255 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.064267 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.098458 5122 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.109449 5122 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.166604 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.166663 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.166677 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.166697 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.166710 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.210658 5122 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.270304 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.270406 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.270441 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.270473 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.270495 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.311281 5122 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.373399 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.373478 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.373497 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.373523 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.373577 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.412113 5122 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.477372 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.477428 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.477447 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.477478 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.477502 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.579729 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.579789 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.579798 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.579813 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.579823 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.682406 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.682468 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.682484 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.682505 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.682520 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.704188 5122 apiserver.go:52] "Watching apiserver"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.715677 5122 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.717930 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-image-registry/node-ca-m9psk","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-jz28d","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-node-b4r7n","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-mr2pp","openshift-multus/multus-additional-cni-plugins-fvpr8","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7","openshift-dns/node-resolver-fx7q7","openshift-multus/network-metrics-daemon-gwpx2","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-etcd/etcd-crc"]
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.721293 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.724797 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.725191 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.725309 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.725990 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.726418 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.729380 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.729489 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.731225 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.731357 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.733201 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.733641 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.733699 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.733862 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.734602 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.734656 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.735700 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.735955 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap"
reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.739456 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fx7q7" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.742760 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.742794 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.743187 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.747632 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.757673 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.757765 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.760513 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.760599 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.760605 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.760644 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.760795 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.760863 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.760889 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.761136 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.761237 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.763177 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.765204 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.765337 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.765517 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.766627 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.766970 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.767342 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.770732 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.770995 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.771299 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.771313 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.772440 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.772670 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.772996 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.773188 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.776195 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.777741 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.777857 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.780088 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.780178 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.781947 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.784536 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.784572 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.784582 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.784597 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.784611 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.786215 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m9psk" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.788495 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.788585 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.788635 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.788870 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.789456 5122 scope.go:117] "RemoveContainer" containerID="e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e" Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.789824 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.792658 5122 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.800465 5122 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.805840 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.817637 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.826997 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fx7q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d74d9236-00a9-41f7-ab0c-581000673894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-696q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fx7q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.838321 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.845307 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fx7q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d74d9236-00a9-41f7-ab0c-581000673894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-696q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fx7q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.855166 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w5q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w5q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-48fw7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.856806 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.856974 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.856999 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857023 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857045 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857090 
5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857118 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857141 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857162 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857191 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857214 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857236 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857261 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857284 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857306 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857317 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857332 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857355 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857379 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857405 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857427 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857448 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857468 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857491 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857513 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857532 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857553 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.857550 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858152 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858437 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858515 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858585 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858615 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858665 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858682 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858718 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.858972 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.859122 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.859349 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.859477 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.859591 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.859828 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.860056 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.860228 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.860307 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.860355 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.860563 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.860839 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.860963 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.860901 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861031 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861123 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861201 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861489 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861521 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861674 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.861698 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:27.361669184 +0000 UTC m=+94.451123717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861728 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861757 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861780 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861804 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861826 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861848 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861869 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861873 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861752 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861892 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.861999 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.862057 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.862142 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.862639 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.862772 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863069 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863122 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863149 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863147 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863242 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863271 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863306 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863340 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863359 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863470 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863500 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863545 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863577 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863598 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863608 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863741 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863771 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863792 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863812 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863834 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863854 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863870 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863886 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863905 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863921 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863936 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863955 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863973 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.863992 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864008 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864024 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864031 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864042 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864135 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864175 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864210 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224
00:10:26.864247 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864404 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864445 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864476 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864487 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864508 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864605 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864664 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864716 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864729 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864790 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864842 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864889 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864937 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864939 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864952 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.864986 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865035 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865126 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865182 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865236 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865284 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865332 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865339 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865377 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865425 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865454 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865473 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865521 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865560 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865567 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865631 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865671 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865698 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865797 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865838 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865873 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865909 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865969 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866013 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.865995 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866137 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866042 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866212 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866233 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866262 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866290 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866312 5122 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866336 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866370 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866404 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866411 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866439 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866473 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866507 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866542 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866578 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867404 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867480 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867533 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867585 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867634 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867682 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 24 00:10:26 crc 
kubenswrapper[5122]: I0224 00:10:26.867738 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867795 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867986 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.868167 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.869609 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.869952 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870189 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870263 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870316 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870375 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870304 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3839e91a-1b72-44d3-9972-02f9e328831c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvh7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvh7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvh7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvh7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvh7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvh7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvh7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fvpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870436 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870946 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871007 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871038 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod 
\"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871061 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871157 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871178 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871204 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871222 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871296 5122 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871324 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871351 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871377 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871400 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871426 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: 
\"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871453 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871479 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872502 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872560 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872587 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872613 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872640 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872670 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872696 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872721 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872747 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 
00:10:26.872773 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872802 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872829 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872858 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872884 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872909 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872933 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866427 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872959 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866626 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866816 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866912 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866978 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.867064 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.866263 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.868510 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.868744 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.868875 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.868972 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873135 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873182 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873212 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873241 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 
00:10:26.873267 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873296 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873325 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873350 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873357 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873379 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873405 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873439 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873471 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873503 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873521 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873539 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873601 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873630 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873657 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873691 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873714 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873762 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873802 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873889 5122 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873931 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.873966 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.874012 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.874053 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.874227 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod 
"d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.874448 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.874465 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.874589 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875039 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875033 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875272 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875297 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875312 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875340 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875382 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875414 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875596 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.875726 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.874811 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876212 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876249 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876274 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 
00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876301 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876329 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876362 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876391 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876414 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876431 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876451 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876470 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876490 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876510 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876529 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 
00:10:26.876551 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876574 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876603 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876629 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876661 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876679 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod 
\"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.869519 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876698 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.869627 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.869937 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870069 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870239 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870282 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870271 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870369 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870764 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870783 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.870803 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871087 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871167 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871179 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871276 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871470 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871459 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.871506 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872418 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872495 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.872702 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876857 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.876960 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877181 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877230 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877260 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877291 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877328 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877354 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877379 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877409 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877440 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877470 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877502 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: 
\"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877585 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-cnibin\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877615 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-os-release\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877641 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-os-release\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877663 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-socket-dir-parent\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877685 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-slash\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877709 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-script-lib\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877742 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w5q6\" (UniqueName: \"kubernetes.io/projected/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-kube-api-access-2w5q6\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877776 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877802 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877832 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-systemd-units\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877858 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-bin\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877884 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-serviceca\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877913 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvh7h\" (UniqueName: \"kubernetes.io/projected/3839e91a-1b72-44d3-9972-02f9e328831c-kube-api-access-jvh7h\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877941 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-netns\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877963 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.877986 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878009 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-conf-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878034 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878057 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zk4n\" (UniqueName: \"kubernetes.io/projected/b3ea2c06-ac71-4ff2-aba9-54e26871039e-kube-api-access-4zk4n\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878113 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-host\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878142 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-system-cni-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878165 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq45p\" (UniqueName: \"kubernetes.io/projected/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-kube-api-access-xq45p\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878192 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878215 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-k8s-cni-cncf-io\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878239 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-cni-bin\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878263 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-cni-multus\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878287 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-etc-kubernetes\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878310 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878335 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878366 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878396 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878423 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878453 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878483 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878510 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-systemd\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878535 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-var-lib-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878560 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovn-node-metrics-cert\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878591 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-daemon-config\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878686 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878783 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878861 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.878878 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.879003 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.879405 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.879563 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.879783 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880106 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880265 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880410 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ndqr\" (UniqueName: \"kubernetes.io/projected/b5f97112-ba2a-46c0-a285-a845d2f96be9-kube-api-access-9ndqr\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880447 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d74d9236-00a9-41f7-ab0c-581000673894-hosts-file\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880469 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880488 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-node-log\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880485 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880717 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880871 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.881110 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.881291 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.881590 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.880578 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.881839 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.881872 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5f97112-ba2a-46c0-a285-a845d2f96be9-cni-binary-copy\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.881915 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.882897 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.882913 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.882949 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.883135 5122 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.883213 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:27.383195076 +0000 UTC m=+94.472649589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883317 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883450 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883544 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-hostroot\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883590 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-multus-certs\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883590 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883618 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-696q4\" (UniqueName: \"kubernetes.io/projected/d74d9236-00a9-41f7-ab0c-581000673894-kube-api-access-696q4\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883647 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-config\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883687 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a07a0dd1-ea17-44c0-a92f-d51bc168c592-rootfs\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883711 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-log-socket\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883738 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-env-overrides\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883771 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-cni-binary-copy\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883778 5122 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.883837 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884193 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884259 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrpt\" (UniqueName: \"kubernetes.io/projected/a07a0dd1-ea17-44c0-a92f-d51bc168c592-kube-api-access-gzrpt\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884303 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884333 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884333 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884337 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884360 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884389 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dqz\" (UniqueName: \"kubernetes.io/projected/ae9b0319-d6e5-4434-9036-346a520931c8-kube-api-access-f4dqz\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884414 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-system-cni-dir\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884442 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d74d9236-00a9-41f7-ab0c-581000673894-tmp-dir\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884467 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884494 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884531 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a07a0dd1-ea17-44c0-a92f-d51bc168c592-mcd-auth-proxy-config\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884555 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-etc-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]:
I0224 00:10:26.884578 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-ovn\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884601 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884631 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-cni-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884656 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-cnibin\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884680 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-kubelet\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 
00:10:26.884703 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a07a0dd1-ea17-44c0-a92f-d51bc168c592-proxy-tls\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884725 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-kubelet\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884759 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-ovn-kubernetes\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884797 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884822 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-netns\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884846 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-netd\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884884 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884962 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884980 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.884996 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885011 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" 
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885025 5122 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885039 5122 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885053 5122 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885067 5122 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885099 5122 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885096 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885113 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885128 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885145 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885159 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885183 5122 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885212 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885236 5122 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc 
kubenswrapper[5122]: I0224 00:10:26.885256 5122 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885275 5122 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.885281 5122 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885296 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885564 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885766 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885797 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886065 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886152 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886167 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886181 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886197 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886320 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886507 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886580 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886597 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886620 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886756 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886825 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886852 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886860 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.886894 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:27.386870849 +0000 UTC m=+94.476325362 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.886959 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.885289 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887187 5122 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887215 5122 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887232 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 
00:10:26.887248 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887263 5122 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887277 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887302 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887316 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887236 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-jz28d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5f97112-ba2a-46c0-a285-a845d2f96be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9ndqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jz28d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887332 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887347 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887367 5122 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887380 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887393 5122 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 
00:10:26.887407 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887421 5122 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887433 5122 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887446 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887458 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887471 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887484 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887497 5122 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887511 5122 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887537 5122 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887553 5122 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887565 5122 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887583 5122 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887598 5122 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887612 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888114 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888335 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888359 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888373 5122 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888388 5122 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888405 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888478 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888494 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888506 5122 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888519 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888537 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888550 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888563 5122 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888578 5122 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888591 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888604 5122 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888619 5122 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888632 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888645 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888658 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888671 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888684 5122 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888696 5122 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888708 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888721 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888736 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888749 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888762 5122 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888776 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888789 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888801 5122 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888815 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888830 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888844 5122 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888857 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888868 5122 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888929 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888945 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888958 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888973 5122 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888985 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888998 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.889012 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.889025 5122 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.887184 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888919 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.888989 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.889032 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.889118 5122 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.889132 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.889147 5122 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890154 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890190 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890211 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890230 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890250 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890269 5122 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890288 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890306 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890317 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890352 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890373 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890393 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890407 5122 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890424 5122 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890437 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890452 5122 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890466 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890480 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890492 5122 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890506 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890519 5122 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890533 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890609 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890626 5122 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890639 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890692 5122 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890706 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890719 5122 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890769 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890784 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890798 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890810 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.890823 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.891394 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.891421 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.891488 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.891626 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.892156 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.899156 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:26Z","lastTransitionTime":"2026-02-24T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.902800 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.902835 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.902862 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.902875 5122 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.902844 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.902984 5122 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.902956 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:27.402935788 +0000 UTC m=+94.492390301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.903139 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:27.403111893 +0000 UTC m=+94.492566416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.904791 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.905035 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.905122 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.905527 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.905607 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.905695 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.905771 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.905964 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.906242 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.906440 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.906640 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.906945 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907047 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907284 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907359 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907050 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907499 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907654 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.908168 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.908196 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907891 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907978 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.908274 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.908341 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.907874 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.908930 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.909211 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.909951 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.909423 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.911220 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.913533 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.918323 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.918728 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.918857 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.918863 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.918919 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.919294 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.919320 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.919939 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.920305 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.920448 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.920515 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.920520 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.920725 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.920980 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.921020 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.921274 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.921655 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.921702 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.922192 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.922303 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.922554 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.923484 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.923600 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.923504 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d197601-5dc7-4025-aee7-5bfa81d3eef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0981405f8113d4558dfdb52447b5fa4417fe60cfd98889f94fcdd01e93ed7316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0404205dedaef25e5b8b2bad3ce06a78967e0ee0a405a9cdf31c2355a229ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fb1a626a85873c32d3e6fd0269911dfe59a972075effd239133fc641726b461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b96a9424f5348939a10e92657d35277dbd8158c886cd7153f52b0f6e04f4027e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ee57fc103a97dd704af80ae8a445f763395baba3377d997f162f872b97c2d45\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a710b6cd0aa59798f0a69c269a8b24d8b4f17ea6a579e90b47bb202c6169f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a710b6cd0aa59798f0a69c269a8b24d8b4f17ea6a579e90b47bb202c6169f5b\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://95e7c8780e3115cf933d38710ac800c34a21519bb26dde3726004c5d61525982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95e7c8780e3115cf933d38710ac800c34a21519bb26dde3726004c5d61525982\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://26bcfa7e06bd18bc803b0a2c36b6d8824438deb1b4f449ab966718da5ed09c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
d-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26bcfa7e06bd18bc803b0a2c36b6d8824438deb1b4f449ab966718da5ed09c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:08:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.923680 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.926281 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.926671 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.926757 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.926944 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.927293 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.927524 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.927969 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.928178 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.928267 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.928557 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.928757 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.929190 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.929314 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.929497 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.929597 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.929648 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.929800 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.930766 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.936262 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b0a35ad-c3da-4754-8842-c052ad912e2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://cfbf4f7e6544aaa90a5b7583d6b85e287ed0d459941edf55d5ac1fda8a1c905a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd66379a5e0fec18bb00729a9f9015cac040f0c1bc1927f73a7a5603f8d6fe10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e62718d5fa1a2c8d163a016ae2607ec93029e94464ebf0518d890c39534e4b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e11c5ab916
5474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0224 00:10:04.635429 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0224 00:10:04.635574 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0224 00:10:04.636228 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-365932288/tls.crt::/tmp/serving-cert-365932288/tls.key\\\\\\\"\\\\nI0224 00:10:05.034377 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 00:10:05.086824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 00:10:05.086861 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 00:10:05.086903 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 00:10:05.086913 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 00:10:05.094629 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0224 00:10:05.094675 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0224 00:10:05.094689 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 00:10:05.094703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 00:10:05.094716 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 00:10:05.094725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 00:10:05.094734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 00:10:05.094742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0224 00:10:05.095775 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:10:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a68c1527a3daaf2edd8a58adc3928d53f63266e661d665d090ae7d0850e50d2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e2e508b94b0720c8553587b8cfb2f3ad7a5265f46b8e90239d02595822736e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2e508b94b0720c8553587b8cfb2f3ad7a5265f46b8e90239d02595822736e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:08:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.947465 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.947514 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f678b1d-f25a-4efc-86ae-1ddb04ca061f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d6e44643d210ceb4920fcf1c5f1494b26afa004de1f7c12988c9d7999a371048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[
65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a372e08af69843d3b18e6e4c24c44395e0da1c0a36d6b93fbba0efeebdce5cd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a372e08af69843d3b18e6e4c24c44395e0da1c0a36d6b93fbba0efeebdce5cd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:08:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.956220 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.960544 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.963931 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.964641 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.976780 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ea2c06-ac71-4ff2-aba9-54e26871039e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4zk4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b4r7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.984542 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a2b15e-0a81-484c-819c-80c8358e28f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://82c8818558feb94b0d67e95aa2cddd5d1293d6c4d3b927db398b4dedc3dbe6e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd870d8a5266d17b821eea88d085de06b8be9f1ffb9d281f7f78e4e68bcf7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://30d7deb84151dc4c7e62cf03ab1e321de8aae77f535fd6edaaa05fb92be7de9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2415088a0b144b364e014dc2c2793295fa3acf33fdf864215058c6e2fc074ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:08:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992158 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992475 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-system-cni-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992501 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xq45p\" (UniqueName: \"kubernetes.io/projected/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-kube-api-access-xq45p\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992516 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " 
pod="openshift-multus/multus-additional-cni-plugins-fvpr8" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992532 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-k8s-cni-cncf-io\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992546 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-cni-bin\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992560 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-cni-multus\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992574 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-etc-kubernetes\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992622 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-cni-multus\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 
00:10:26.992718 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-system-cni-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992807 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-etc-kubernetes\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992849 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-k8s-cni-cncf-io\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992881 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-cni-bin\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992912 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.992945 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993012 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-systemd\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993046 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-var-lib-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993063 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovn-node-metrics-cert\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993112 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-daemon-config\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993211 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-var-lib-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993300 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-systemd\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993342 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ndqr\" (UniqueName: \"kubernetes.io/projected/b5f97112-ba2a-46c0-a285-a845d2f96be9-kube-api-access-9ndqr\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993376 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d74d9236-00a9-41f7-ab0c-581000673894-hosts-file\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993399 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993421 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-node-log\") pod \"ovnkube-node-b4r7n\" (UID: 
\"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993455 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993477 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5f97112-ba2a-46c0-a285-a845d2f96be9-cni-binary-copy\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993501 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-hostroot\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993524 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-multus-certs\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993546 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-696q4\" (UniqueName: \"kubernetes.io/projected/d74d9236-00a9-41f7-ab0c-581000673894-kube-api-access-696q4\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7" Feb 24 
00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993570 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-config\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993598 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a07a0dd1-ea17-44c0-a92f-d51bc168c592-rootfs\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993618 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-log-socket\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993639 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-env-overrides\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993670 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-cni-binary-copy\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 
00:10:26.993696 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrpt\" (UniqueName: \"kubernetes.io/projected/a07a0dd1-ea17-44c0-a92f-d51bc168c592-kube-api-access-gzrpt\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993722 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993743 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4dqz\" (UniqueName: \"kubernetes.io/projected/ae9b0319-d6e5-4434-9036-346a520931c8-kube-api-access-f4dqz\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993767 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-system-cni-dir\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993788 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d74d9236-00a9-41f7-ab0c-581000673894-tmp-dir\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993794 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993815 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993833 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-daemon-config\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993871 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.993907 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-node-log\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.994017 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.994151 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.994852 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-hostroot\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995118 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-cni-binary-copy\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995127 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d74d9236-00a9-41f7-ab0c-581000673894-hosts-file\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995462 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-system-cni-dir\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995485 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995563 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-multus-certs\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995565 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995617 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a07a0dd1-ea17-44c0-a92f-d51bc168c592-mcd-auth-proxy-config\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995620 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995641 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-etc-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995664 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-ovn\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995687 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995746 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-cni-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995769 5122 reconciler_common.go:224] "operationExecutor.MountVolume started
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-cnibin\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995811 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-cnibin\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995842 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-etc-openvswitch\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995872 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-ovn\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995901 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.995953 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-cni-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.996199 5122 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 00:10:26 crc kubenswrapper[5122]: E0224 00:10:26.996285 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs podName:ae9b0319-d6e5-4434-9036-346a520931c8 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:27.496262439 +0000 UTC m=+94.585716952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs") pod "network-metrics-daemon-gwpx2" (UID: "ae9b0319-d6e5-4434-9036-346a520931c8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.996453 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a07a0dd1-ea17-44c0-a92f-d51bc168c592-rootfs\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.996516 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-log-socket\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.996551 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-kubelet\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.996917 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a07a0dd1-ea17-44c0-a92f-d51bc168c592-proxy-tls\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.996950 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-kubelet\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.996983 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-ovn-kubernetes\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997014 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-netns\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997037 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-netd\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997088 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-cnibin\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997120 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-os-release\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997147 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-os-release\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997173 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-socket-dir-parent\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997197 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-slash\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997225 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-script-lib\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997259 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2w5q6\" (UniqueName: \"kubernetes.io/projected/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-kube-api-access-2w5q6\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997290 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-netd\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997307 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-systemd-units\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.996579 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-var-lib-kubelet\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997339
5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-bin\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997368 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-serviceca\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997399 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jvh7h\" (UniqueName: \"kubernetes.io/projected/3839e91a-1b72-44d3-9972-02f9e328831c-kube-api-access-jvh7h\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997427 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-netns\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997452 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997494 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997526 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-conf-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997556 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4zk4n\" (UniqueName: \"kubernetes.io/projected/b3ea2c06-ac71-4ff2-aba9-54e26871039e-kube-api-access-4zk4n\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997584 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-host\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997774 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997799 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997813 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997827 5122 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997840 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997858 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997871 5122 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997883 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997898 5122 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997912 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997924 5122 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997936 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997952 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997964 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997976 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.997988 5122 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998005 5122 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998018 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998030 5122 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998041 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998058 5122 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998087 5122 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998099 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998118 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc
kubenswrapper[5122]: I0224 00:10:26.998130 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998144 5122 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998155 5122 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998171 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998171 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-ovn-kubernetes\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998183 5122 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998223 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-host\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998238 5122 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998284 5122 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998306 5122 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998320 5122 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998333 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998321 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-cnibin\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998347 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998364 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998378 5122 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998391 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998405 5122 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998412 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3839e91a-1b72-44d3-9972-02f9e328831c-os-release\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998431 5122 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998461 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998473 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-os-release\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998481 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998504 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998529 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-socket-dir-parent\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998531 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998555 5122 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998570 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-slash\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998575 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998612 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998625 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998636 5122 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998631 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-kubelet\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n"
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998648 5122 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998684 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998880 5122 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998895 5122 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998948 5122 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.998961 5122 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999021 5122 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999036 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\"
DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999054 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999083 5122 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999096 5122 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999114 5122 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999128 5122 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999140 5122 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999153 5122 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999174 5122 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999188 5122 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999201 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999265 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999286 5122 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999298 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999313 5122 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999327 5122 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999345 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999357 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999371 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999388 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999403 5122 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999418 5122 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999435 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999456 5122 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999469 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999485 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:26 crc kubenswrapper[5122]: I0224 00:10:26.999498 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999516 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999529 5122 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999542 5122 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") 
on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999559 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999571 5122 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999584 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999596 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999612 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999624 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999636 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999648 5122 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999663 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999675 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:26.999785 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-netns\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.000154 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-systemd-units\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.000201 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-bin\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.003050 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.003147 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-multus-conf-dir\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.003432 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5f97112-ba2a-46c0-a285-a845d2f96be9-host-run-netns\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.005153 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.005179 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.005196 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.005238 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.005249 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.007893 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.008343 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07a0dd1-ea17-44c0-a92f-d51bc168c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzrpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzrpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mr2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.008393 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d74d9236-00a9-41f7-ab0c-581000673894-tmp-dir\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.008592 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5f97112-ba2a-46c0-a285-a845d2f96be9-cni-binary-copy\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.008809 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a07a0dd1-ea17-44c0-a92f-d51bc168c592-mcd-auth-proxy-config\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.009462 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovn-node-metrics-cert\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.010681 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-env-overrides\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 
00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.011306 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-config\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.011798 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrpt\" (UniqueName: \"kubernetes.io/projected/a07a0dd1-ea17-44c0-a92f-d51bc168c592-kube-api-access-gzrpt\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.012129 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a07a0dd1-ea17-44c0-a92f-d51bc168c592-proxy-tls\") pod \"machine-config-daemon-mr2pp\" (UID: \"a07a0dd1-ea17-44c0-a92f-d51bc168c592\") " pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.014060 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ndqr\" (UniqueName: \"kubernetes.io/projected/b5f97112-ba2a-46c0-a285-a845d2f96be9-kube-api-access-9ndqr\") pod \"multus-jz28d\" (UID: \"b5f97112-ba2a-46c0-a285-a845d2f96be9\") " pod="openshift-multus/multus-jz28d" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.016022 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-script-lib\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.017535 5122 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gwpx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae9b0319-d6e5-4434-9036-346a520931c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4dqz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4dqz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gwpx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.019869 5122 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-2w5q6\" (UniqueName: \"kubernetes.io/projected/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-kube-api-access-2w5q6\") pod \"ovnkube-control-plane-57b78d8988-48fw7\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.020478 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3839e91a-1b72-44d3-9972-02f9e328831c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.021482 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zk4n\" (UniqueName: \"kubernetes.io/projected/b3ea2c06-ac71-4ff2-aba9-54e26871039e-kube-api-access-4zk4n\") pod \"ovnkube-node-b4r7n\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.021483 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvh7h\" (UniqueName: \"kubernetes.io/projected/3839e91a-1b72-44d3-9972-02f9e328831c-kube-api-access-jvh7h\") pod \"multus-additional-cni-plugins-fvpr8\" (UID: \"3839e91a-1b72-44d3-9972-02f9e328831c\") " pod="openshift-multus/multus-additional-cni-plugins-fvpr8" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.021860 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-696q4\" (UniqueName: \"kubernetes.io/projected/d74d9236-00a9-41f7-ab0c-581000673894-kube-api-access-696q4\") pod \"node-resolver-fx7q7\" (UID: \"d74d9236-00a9-41f7-ab0c-581000673894\") " pod="openshift-dns/node-resolver-fx7q7" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.022108 5122 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-serviceca\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.022350 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq45p\" (UniqueName: \"kubernetes.io/projected/ff60bb2a-ec51-46fd-b136-baab6ed82f1e-kube-api-access-xq45p\") pod \"node-ca-m9psk\" (UID: \"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\") " pod="openshift-image-registry/node-ca-m9psk" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.023155 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4dqz\" (UniqueName: \"kubernetes.io/projected/ae9b0319-d6e5-4434-9036-346a520931c8-kube-api-access-f4dqz\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.027166 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09e4aff-eb2c-4550-a6ca-fb1d97c65b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5dde031f80d706aaad533a8ae7343d88019c52241161581110e7c1dd1e0a210a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://363f5c9ff48c3fdaf5b9a6cc53eec30e0a9a336cc8aa985ae3d895d4b1090acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00d43efa38f5033cfa155c64f6d684c0248152b40fa12b642aaeab2cd652b289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b58dd6893e4cef4d951983cf64df766277d25e7d254fdb9da4768cc60541d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b58dd6893e4cef4d951983cf64df766277d25e7d254fdb9da4768cc60541d49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:08:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:08:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.038038 5122 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.039560 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.048108 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.049253 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.056878 5122 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m9psk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff60bb2a-ec51-46fd-b136-baab6ed82f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq45p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:10:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m9psk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.058986 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 24 00:10:27 crc kubenswrapper[5122]: W0224 00:10:27.062267 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-91b237800b68bc63e5060b9afe4b64b83691ee4a916397d70a8de03e0397fd89 WatchSource:0}: Error finding container 91b237800b68bc63e5060b9afe4b64b83691ee4a916397d70a8de03e0397fd89: Status 404 returned error can't find the container with id 91b237800b68bc63e5060b9afe4b64b83691ee4a916397d70a8de03e0397fd89 Feb 24 00:10:27 crc kubenswrapper[5122]: W0224 00:10:27.072311 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-bcb0e415af554b6af6df95d0744cf1e2f07201b750d4c5f6fbeb8114f70677df WatchSource:0}: Error finding container bcb0e415af554b6af6df95d0744cf1e2f07201b750d4c5f6fbeb8114f70677df: Status 404 returned error can't find the container with id bcb0e415af554b6af6df95d0744cf1e2f07201b750d4c5f6fbeb8114f70677df Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.073334 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fx7q7" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.081759 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" Feb 24 00:10:27 crc kubenswrapper[5122]: W0224 00:10:27.093561 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd74d9236_00a9_41f7_ab0c_581000673894.slice/crio-d32d98ea762544a2c6b4f9e07a7c7d643e454a7fac08553f127cd067368cd2dd WatchSource:0}: Error finding container d32d98ea762544a2c6b4f9e07a7c7d643e454a7fac08553f127cd067368cd2dd: Status 404 returned error can't find the container with id d32d98ea762544a2c6b4f9e07a7c7d643e454a7fac08553f127cd067368cd2dd Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.095352 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jz28d" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.105874 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.108887 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.108918 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.108927 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.108941 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.108950 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: W0224 00:10:27.109416 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3839e91a_1b72_44d3_9972_02f9e328831c.slice/crio-bb7cf7dfe942f074930740a8d51278733afb2d2f16891babb52b4e4bca5d3071 WatchSource:0}: Error finding container bb7cf7dfe942f074930740a8d51278733afb2d2f16891babb52b4e4bca5d3071: Status 404 returned error can't find the container with id bb7cf7dfe942f074930740a8d51278733afb2d2f16891babb52b4e4bca5d3071 Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.114554 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.125596 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.133101 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-m9psk" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.136392 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerStarted","Data":"bb7cf7dfe942f074930740a8d51278733afb2d2f16891babb52b4e4bca5d3071"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.136920 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jz28d" event={"ID":"b5f97112-ba2a-46c0-a285-a845d2f96be9","Type":"ContainerStarted","Data":"288da48509271ddfa51b81a6da1fce55f6fdb52f8f50cb2af8fbac0b7be960b1"} Feb 24 00:10:27 crc kubenswrapper[5122]: W0224 00:10:27.137832 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3ea2c06_ac71_4ff2_aba9_54e26871039e.slice/crio-e04d77c6147dad4200aa5e175c277ec89bbd7f0e8770e58347edb6da6dbebf98 WatchSource:0}: Error finding container e04d77c6147dad4200aa5e175c277ec89bbd7f0e8770e58347edb6da6dbebf98: Status 404 returned error can't find the container with id e04d77c6147dad4200aa5e175c277ec89bbd7f0e8770e58347edb6da6dbebf98 Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.137885 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fx7q7" event={"ID":"d74d9236-00a9-41f7-ab0c-581000673894","Type":"ContainerStarted","Data":"d32d98ea762544a2c6b4f9e07a7c7d643e454a7fac08553f127cd067368cd2dd"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.138897 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"bcb0e415af554b6af6df95d0744cf1e2f07201b750d4c5f6fbeb8114f70677df"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.139683 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f08c9ef6a1e8785ac237ced73bf565e15fb3ebcec5ac4bb6616f2121fe579d8f"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.141324 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"91b237800b68bc63e5060b9afe4b64b83691ee4a916397d70a8de03e0397fd89"} Feb 24 00:10:27 crc kubenswrapper[5122]: W0224 00:10:27.146413 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03f5a8e7_4852_4e7b_8dca_ce9f9facfe85.slice/crio-306fa3f2b6c3715596ad445fa4eb619d877e86fbb86e477d60c9e18cd4bdcc4d WatchSource:0}: Error finding container 306fa3f2b6c3715596ad445fa4eb619d877e86fbb86e477d60c9e18cd4bdcc4d: Status 404 returned error can't find the container with id 306fa3f2b6c3715596ad445fa4eb619d877e86fbb86e477d60c9e18cd4bdcc4d Feb 24 00:10:27 crc kubenswrapper[5122]: W0224 00:10:27.149095 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda07a0dd1_ea17_44c0_a92f_d51bc168c592.slice/crio-416cd4c2ef3e344efc1228a1069f6672315c10371139cfd4b1593d40b67361fd WatchSource:0}: Error finding container 416cd4c2ef3e344efc1228a1069f6672315c10371139cfd4b1593d40b67361fd: Status 404 returned error can't find the container with id 416cd4c2ef3e344efc1228a1069f6672315c10371139cfd4b1593d40b67361fd Feb 24 00:10:27 crc kubenswrapper[5122]: W0224 00:10:27.162644 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff60bb2a_ec51_46fd_b136_baab6ed82f1e.slice/crio-ddf733ddbd3378c400852dfa9ebd11f48d16f19632c779209782ff516b6fea04 WatchSource:0}: Error finding container 
ddf733ddbd3378c400852dfa9ebd11f48d16f19632c779209782ff516b6fea04: Status 404 returned error can't find the container with id ddf733ddbd3378c400852dfa9ebd11f48d16f19632c779209782ff516b6fea04 Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.210221 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.210255 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.210265 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.210277 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.210287 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.312911 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.312956 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.312966 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.312982 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.312991 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.403581 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.403707 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403742 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:28.403714217 +0000 UTC m=+95.493168730 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.403789 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403810 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.403816 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.403840 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403822 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403910 5122 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403920 5122 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403947 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:28.403936774 +0000 UTC m=+95.493391287 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403864 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403962 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:28.403955234 +0000 UTC m=+95.493409747 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403968 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403978 5122 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.404005 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:28.403996245 +0000 UTC m=+95.493450758 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.403892 5122 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.404038 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:28.404029626 +0000 UTC m=+95.493484139 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.416350 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.416392 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.416406 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.416422 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.416434 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.505010 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.505145 5122 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.505219 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs podName:ae9b0319-d6e5-4434-9036-346a520931c8 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:28.505199637 +0000 UTC m=+95.594654160 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs") pod "network-metrics-daemon-gwpx2" (UID: "ae9b0319-d6e5-4434-9036-346a520931c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.518394 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.518426 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.518435 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.518448 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.518457 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.620979 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.621017 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.621026 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.621039 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.621048 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.724408 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.724453 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.724462 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.724475 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.724486 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.774540 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:27 crc kubenswrapper[5122]: E0224 00:10:27.774677 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.780600 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.781787 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.783848 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.786356 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.790274 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.794438 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.796052 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.797866 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.798600 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.800152 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.801914 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.804686 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.805590 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.808638 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.809170 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.810730 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.811517 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.813232 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.814512 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.816991 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.818695 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.821965 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.823260 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.825423 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.826648 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.826703 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.826720 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.826742 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.826758 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.826817 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.827726 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.835547 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.836363 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.840700 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.841606 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.844109 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.845293 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" 
path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.847513 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.849211 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.849907 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.850869 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.851910 5122 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.852012 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.855499 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.862106 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.866378 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.870111 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.870833 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.873146 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.874418 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.875151 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.876918 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.879000 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.881286 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.882795 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.885187 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.887675 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.890428 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.893558 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.898472 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.901425 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.902737 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.904400 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.929678 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.929725 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.929739 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.929757 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:27 crc kubenswrapper[5122]: I0224 00:10:27.929769 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:27Z","lastTransitionTime":"2026-02-24T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.031714 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.031775 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.031791 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.031811 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.031826 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.134197 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.134237 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.134247 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.134261 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.134270 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.146916 5122 generic.go:358] "Generic (PLEG): container finished" podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a" exitCode=0 Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.146994 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.147024 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"e04d77c6147dad4200aa5e175c277ec89bbd7f0e8770e58347edb6da6dbebf98"} Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.149032 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"a5f092ce69514a23376c0933e1fc6c5ac608427056aa3bdbbeb09e2f3b1104f4"} Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.149056 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"73cf22631bce10f6195cc5bf18e0532829e23827e5caef8d4c7a64bb33e6728b"} Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.149065 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"416cd4c2ef3e344efc1228a1069f6672315c10371139cfd4b1593d40b67361fd"} Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 
00:10:28.151123 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" event={"ID":"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85","Type":"ContainerStarted","Data":"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.151168 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" event={"ID":"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85","Type":"ContainerStarted","Data":"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.151187 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" event={"ID":"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85","Type":"ContainerStarted","Data":"306fa3f2b6c3715596ad445fa4eb619d877e86fbb86e477d60c9e18cd4bdcc4d"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.152607 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fx7q7" event={"ID":"d74d9236-00a9-41f7-ab0c-581000673894","Type":"ContainerStarted","Data":"0e23d5e4872093e4322f93bcfa56e92c31580286f5aa99ece103ae56ec181e11"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.153943 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"872b1719c35579b359f06e976238350cba540e95be183c5db6ae6367334fbf5d"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.153969 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"d27d7f9e5fee36672d155b3cf0b4be284cf2cdc08a0d3ac2399a4db9d609cedd"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.155606 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m9psk" event={"ID":"ff60bb2a-ec51-46fd-b136-baab6ed82f1e","Type":"ContainerStarted","Data":"c41c3d7efe6e3af424d8a597f8456b90fb3ebc7e4fe0c37ce512edd74fa3999b"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.155629 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m9psk" event={"ID":"ff60bb2a-ec51-46fd-b136-baab6ed82f1e","Type":"ContainerStarted","Data":"ddf733ddbd3378c400852dfa9ebd11f48d16f19632c779209782ff516b6fea04"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.156860 5122 generic.go:358] "Generic (PLEG): container finished" podID="3839e91a-1b72-44d3-9972-02f9e328831c" containerID="a49c887ded7f21c829c75a273e9792db250d63844a4dafcc68398fd2566feb8f" exitCode=0
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.156917 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerDied","Data":"a49c887ded7f21c829c75a273e9792db250d63844a4dafcc68398fd2566feb8f"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.158203 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jz28d" event={"ID":"b5f97112-ba2a-46c0-a285-a845d2f96be9","Type":"ContainerStarted","Data":"6ae8c4088d8f6d9782d4238d3f74f219f9f4ebb6252995e8203d6f7002583268"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.159721 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"90ca692570397de951b48749ae35467f58b578b13de687b5d7576ecaaf9326a9"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.180966 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.180953802 podStartE2EDuration="2.180953802s" podCreationTimestamp="2026-02-24 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.180818238 +0000 UTC m=+95.270272761" watchObservedRunningTime="2026-02-24 00:10:28.180953802 +0000 UTC m=+95.270408315"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.241561 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.241613 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.241632 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.241650 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.241662 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.270351 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.270335002 podStartE2EDuration="2.270335002s" podCreationTimestamp="2026-02-24 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.255153068 +0000 UTC m=+95.344607571" watchObservedRunningTime="2026-02-24 00:10:28.270335002 +0000 UTC m=+95.359789515"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.343564 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.343609 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.343622 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.343639 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.343651 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.415200 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:30.415170984 +0000 UTC m=+97.504625507 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.415484 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.415660 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.415715 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.415740 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.415764 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.415920 5122 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.415975 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:30.415964546 +0000 UTC m=+97.505419059 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416453 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416478 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416492 5122 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416527 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:30.416516642 +0000 UTC m=+97.505971235 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416583 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416593 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416602 5122 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416630 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:30.416621025 +0000 UTC m=+97.506075538 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416664 5122 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.416683 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:30.416677476 +0000 UTC m=+97.506131989 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.437807 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.437793677 podStartE2EDuration="2.437793677s" podCreationTimestamp="2026-02-24 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.400271877 +0000 UTC m=+95.489726430" watchObservedRunningTime="2026-02-24 00:10:28.437793677 +0000 UTC m=+95.527248190"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.444852 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.444888 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.444899 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.444913 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.444924 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.456631 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.456610434 podStartE2EDuration="2.456610434s" podCreationTimestamp="2026-02-24 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.456013857 +0000 UTC m=+95.545468380" watchObservedRunningTime="2026-02-24 00:10:28.456610434 +0000 UTC m=+95.546064947"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.516226 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.516404 5122 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.516499 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs podName:ae9b0319-d6e5-4434-9036-346a520931c8 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:30.516479858 +0000 UTC m=+97.605934371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs") pod "network-metrics-daemon-gwpx2" (UID: "ae9b0319-d6e5-4434-9036-346a520931c8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.546621 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.546664 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.546675 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.546690 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.546700 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.560045 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-m9psk" podStartSLOduration=72.560028607 podStartE2EDuration="1m12.560028607s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.559620955 +0000 UTC m=+95.649075488" watchObservedRunningTime="2026-02-24 00:10:28.560028607 +0000 UTC m=+95.649483140"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.584615 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fx7q7" podStartSLOduration=74.584598844 podStartE2EDuration="1m14.584598844s" podCreationTimestamp="2026-02-24 00:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.583936706 +0000 UTC m=+95.673391229" watchObservedRunningTime="2026-02-24 00:10:28.584598844 +0000 UTC m=+95.674053357"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.597129 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" podStartSLOduration=72.597108864 podStartE2EDuration="1m12.597108864s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.596345823 +0000 UTC m=+95.685800356" watchObservedRunningTime="2026-02-24 00:10:28.597108864 +0000 UTC m=+95.686563397"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.655396 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.655435 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.655483 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.655500 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.655512 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.656964 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jz28d" podStartSLOduration=72.656954558 podStartE2EDuration="1m12.656954558s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.640886159 +0000 UTC m=+95.730340672" watchObservedRunningTime="2026-02-24 00:10:28.656954558 +0000 UTC m=+95.746409071"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.757455 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.757493 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.757502 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.757518 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.757527 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.775384 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.775679 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.775879 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.775748 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.776020 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 24 00:10:28 crc kubenswrapper[5122]: E0224 00:10:28.776150 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.860113 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.860164 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.860176 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.860194 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.860206 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.962291 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.962338 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.962350 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.962366 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:28 crc kubenswrapper[5122]: I0224 00:10:28.962377 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:28Z","lastTransitionTime":"2026-02-24T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.064562 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.064609 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.064621 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.064637 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.064648 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.165600 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerStarted","Data":"6c9e9d5431dbbdbb5c3ae7e219982248b46e3baf0aadf9eb2095ec5985e22a06"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.169883 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.169924 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.169937 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.169954 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.169966 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.171549 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.171582 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.191062 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podStartSLOduration=73.19103893 podStartE2EDuration="1m13.19103893s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:28.657860654 +0000 UTC m=+95.747315177" watchObservedRunningTime="2026-02-24 00:10:29.19103893 +0000 UTC m=+96.280493453"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.271707 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.271743 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.271752 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.271764 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.271773 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.373299 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.373332 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.373340 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.373353 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.373362 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.482287 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.482608 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.482621 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.482636 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.482647 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.585365 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.585447 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.585466 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.585509 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.585526 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.687855 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.687919 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.687939 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.687962 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.687982 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.773969 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:29 crc kubenswrapper[5122]: E0224 00:10:29.774118 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.790044 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.790128 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.790146 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.790167 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.790183 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.891884 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.891937 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.891950 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.891970 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.891983 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.994175 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.994222 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.994231 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.994246 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:29 crc kubenswrapper[5122]: I0224 00:10:29.994256 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:29Z","lastTransitionTime":"2026-02-24T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.095949 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.096016 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.096033 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.096052 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.096065 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.176509 5122 generic.go:358] "Generic (PLEG): container finished" podID="3839e91a-1b72-44d3-9972-02f9e328831c" containerID="6c9e9d5431dbbdbb5c3ae7e219982248b46e3baf0aadf9eb2095ec5985e22a06" exitCode=0 Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.176612 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerDied","Data":"6c9e9d5431dbbdbb5c3ae7e219982248b46e3baf0aadf9eb2095ec5985e22a06"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.184718 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.184804 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.184840 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.184864 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.187833 5122 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"9fc3e6840c52e018336fc83e6088dd7898ddb5def706dbf58e2cfbfda962965a"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.197593 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.197653 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.197672 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.197697 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.197716 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.299648 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.299694 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.299706 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.299725 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.299739 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.401314 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.401347 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.401356 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.401369 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.401379 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.439502 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.439590 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.439682 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:34.439633691 +0000 UTC m=+101.529088204 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.439733 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.439752 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.439764 5122 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.439828 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:34.439811926 +0000 UTC m=+101.529266449 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.439971 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.440001 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.440020 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.440127 5122 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:10:30 crc 
kubenswrapper[5122]: E0224 00:10:30.440179 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:34.440164656 +0000 UTC m=+101.529619189 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.440205 5122 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.440242 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:34.440231338 +0000 UTC m=+101.529685851 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.440271 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.440300 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.440319 5122 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.440395 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:34.440377722 +0000 UTC m=+101.529832265 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.502982 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.503255 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.503264 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.503277 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.503286 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.541252 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.541497 5122 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.541608 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs podName:ae9b0319-d6e5-4434-9036-346a520931c8 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:34.541581833 +0000 UTC m=+101.631036376 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs") pod "network-metrics-daemon-gwpx2" (UID: "ae9b0319-d6e5-4434-9036-346a520931c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.605696 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.605746 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.605762 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.605778 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.605788 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.708380 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.708444 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.708463 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.708488 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.708507 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.774843 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.774885 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.775059 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.775552 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.775726 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 24 00:10:30 crc kubenswrapper[5122]: E0224 00:10:30.775890 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.810822 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.810875 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.810889 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.810939 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.810952 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.913022 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.913151 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.913179 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.913211 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:30 crc kubenswrapper[5122]: I0224 00:10:30.913293 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:30Z","lastTransitionTime":"2026-02-24T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.015460 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.015531 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.015555 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.015587 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.015613 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.118245 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.118310 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.118328 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.118396 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.118438 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.195804 5122 generic.go:358] "Generic (PLEG): container finished" podID="3839e91a-1b72-44d3-9972-02f9e328831c" containerID="d76e3e5b3b5e1ed1e1c5db6be5fa92c1a17e76a60403a43ddc4d42655f9f6285" exitCode=0 Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.195965 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerDied","Data":"d76e3e5b3b5e1ed1e1c5db6be5fa92c1a17e76a60403a43ddc4d42655f9f6285"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.221271 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.221325 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.221341 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.221361 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.221376 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.324755 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.324801 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.324820 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.324839 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.324851 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.426966 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.427002 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.427013 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.427029 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.427040 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.528850 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.528889 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.528898 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.528916 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.528926 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.631913 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.631970 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.631990 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.632013 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.632029 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.733894 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.733935 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.733947 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.733964 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.733976 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.779592 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:31 crc kubenswrapper[5122]: E0224 00:10:31.779731 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.836296 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.836347 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.836360 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.836378 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.836390 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.942212 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.942274 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.942286 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.942304 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:31 crc kubenswrapper[5122]: I0224 00:10:31.942317 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:31Z","lastTransitionTime":"2026-02-24T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.044720 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.044860 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.044881 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.044912 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.044932 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.147395 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.147444 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.147458 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.147478 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.147491 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.205381 5122 generic.go:358] "Generic (PLEG): container finished" podID="3839e91a-1b72-44d3-9972-02f9e328831c" containerID="d47364d597f90203cacc953c3d2fdddd8a2174bb402660aacfe3479228b44f3a" exitCode=0 Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.205473 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerDied","Data":"d47364d597f90203cacc953c3d2fdddd8a2174bb402660aacfe3479228b44f3a"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.213051 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.251877 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.251963 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.251985 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.252015 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.252037 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.360903 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.360972 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.360990 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.361016 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.361036 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.466939 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.466995 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.467013 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.467037 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.467054 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.571646 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.571700 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.571713 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.571732 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.571745 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.674222 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.674320 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.674408 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.674443 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.674467 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.773869 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.773918 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.774035 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:32 crc kubenswrapper[5122]: E0224 00:10:32.774030 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8" Feb 24 00:10:32 crc kubenswrapper[5122]: E0224 00:10:32.774168 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 24 00:10:32 crc kubenswrapper[5122]: E0224 00:10:32.774265 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.775909 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.775949 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.775960 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.775976 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.775984 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.877954 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.877985 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.877996 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.878012 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.878023 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.981302 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.981731 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.981746 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.981766 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:32 crc kubenswrapper[5122]: I0224 00:10:32.981782 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:32Z","lastTransitionTime":"2026-02-24T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.084279 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.084378 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.084404 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.084435 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.084454 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:33Z","lastTransitionTime":"2026-02-24T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.186820 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.186880 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.186895 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.186918 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.186932 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:33Z","lastTransitionTime":"2026-02-24T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.218613 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerStarted","Data":"ad6607a5140072c7a43fa258b7327469db8df31c0bc76ed157b9288ecfb48545"} Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.288888 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.288921 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.288930 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.288941 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.288950 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:33Z","lastTransitionTime":"2026-02-24T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.390937 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.390970 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.390978 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.390991 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.391000 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:33Z","lastTransitionTime":"2026-02-24T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.493491 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.493540 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.493556 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.493580 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.493594 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:33Z","lastTransitionTime":"2026-02-24T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.542555 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.542639 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.542662 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.542694 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.542716 5122 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:10:33Z","lastTransitionTime":"2026-02-24T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.586862 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf"] Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.595694 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.598035 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.598039 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.598215 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.599638 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.677544 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6652393b-3417-44f3-bb11-7bc5ec82877d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.677613 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6652393b-3417-44f3-bb11-7bc5ec82877d-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.677639 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6652393b-3417-44f3-bb11-7bc5ec82877d-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.677661 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6652393b-3417-44f3-bb11-7bc5ec82877d-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.677867 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6652393b-3417-44f3-bb11-7bc5ec82877d-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.760297 5122 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.771324 5122 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.779798 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6652393b-3417-44f3-bb11-7bc5ec82877d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.779928 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6652393b-3417-44f3-bb11-7bc5ec82877d-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.779978 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6652393b-3417-44f3-bb11-7bc5ec82877d-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.780009 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6652393b-3417-44f3-bb11-7bc5ec82877d-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.780008 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6652393b-3417-44f3-bb11-7bc5ec82877d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.780178 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6652393b-3417-44f3-bb11-7bc5ec82877d-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.780548 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6652393b-3417-44f3-bb11-7bc5ec82877d-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.780738 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:33 crc kubenswrapper[5122]: E0224 00:10:33.780942 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.781714 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6652393b-3417-44f3-bb11-7bc5ec82877d-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.790019 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6652393b-3417-44f3-bb11-7bc5ec82877d-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.806472 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6652393b-3417-44f3-bb11-7bc5ec82877d-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-l4bjf\" (UID: \"6652393b-3417-44f3-bb11-7bc5ec82877d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: I0224 00:10:33.907642 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" Feb 24 00:10:33 crc kubenswrapper[5122]: W0224 00:10:33.920732 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6652393b_3417_44f3_bb11_7bc5ec82877d.slice/crio-83f531dcb7db7ae256fc6decef39b3e514dfcde5dce2f62ef0df0d8b8f4c6ddf WatchSource:0}: Error finding container 83f531dcb7db7ae256fc6decef39b3e514dfcde5dce2f62ef0df0d8b8f4c6ddf: Status 404 returned error can't find the container with id 83f531dcb7db7ae256fc6decef39b3e514dfcde5dce2f62ef0df0d8b8f4c6ddf Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.225556 5122 generic.go:358] "Generic (PLEG): container finished" podID="3839e91a-1b72-44d3-9972-02f9e328831c" containerID="ad6607a5140072c7a43fa258b7327469db8df31c0bc76ed157b9288ecfb48545" exitCode=0 Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.225604 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerDied","Data":"ad6607a5140072c7a43fa258b7327469db8df31c0bc76ed157b9288ecfb48545"} Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.227002 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" event={"ID":"6652393b-3417-44f3-bb11-7bc5ec82877d","Type":"ContainerStarted","Data":"83f531dcb7db7ae256fc6decef39b3e514dfcde5dce2f62ef0df0d8b8f4c6ddf"} Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.488475 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:34 crc kubenswrapper[5122]: 
I0224 00:10:34.488570 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.488598 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.488613 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488682 5122 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488680 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:42.488622595 +0000 UTC m=+109.578077108 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488732 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488755 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488771 5122 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488810 5122 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488855 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:42.488832181 +0000 UTC m=+109.578286694 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488883 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:42.488873792 +0000 UTC m=+109.578328415 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.488906 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:42.488895723 +0000 UTC m=+109.578350236 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.489029 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.489181 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.489207 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.489217 5122 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.489260 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:42.489250623 +0000 UTC m=+109.578705206 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.589826 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.589986 5122 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.590062 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs podName:ae9b0319-d6e5-4434-9036-346a520931c8 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:42.590041202 +0000 UTC m=+109.679495725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs") pod "network-metrics-daemon-gwpx2" (UID: "ae9b0319-d6e5-4434-9036-346a520931c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.774204 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.774339 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8" Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.774699 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.774770 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 24 00:10:34 crc kubenswrapper[5122]: I0224 00:10:34.774826 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:34 crc kubenswrapper[5122]: E0224 00:10:34.774883 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.233463 5122 generic.go:358] "Generic (PLEG): container finished" podID="3839e91a-1b72-44d3-9972-02f9e328831c" containerID="2449fae28477325b1dea6e6aa287c7c3d2975ad542b124ec48866e3fa2caa26c" exitCode=0 Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.233551 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerDied","Data":"2449fae28477325b1dea6e6aa287c7c3d2975ad542b124ec48866e3fa2caa26c"} Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.238899 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerStarted","Data":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.240338 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" event={"ID":"6652393b-3417-44f3-bb11-7bc5ec82877d","Type":"ContainerStarted","Data":"9c84dfa238c44e81ff3a0d9d35a32276304ac6404218aa79fb137c8924ae8026"} Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.263141 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.263194 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.286769 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podStartSLOduration=79.286742234 podStartE2EDuration="1m19.286742234s" 
podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:35.286656071 +0000 UTC m=+102.376110614" watchObservedRunningTime="2026-02-24 00:10:35.286742234 +0000 UTC m=+102.376196757" Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.293764 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.301935 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-l4bjf" podStartSLOduration=79.301920278 podStartE2EDuration="1m19.301920278s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:35.301459715 +0000 UTC m=+102.390914248" watchObservedRunningTime="2026-02-24 00:10:35.301920278 +0000 UTC m=+102.391374801" Feb 24 00:10:35 crc kubenswrapper[5122]: I0224 00:10:35.779360 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:35 crc kubenswrapper[5122]: E0224 00:10:35.779512 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 24 00:10:36 crc kubenswrapper[5122]: I0224 00:10:36.250108 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" event={"ID":"3839e91a-1b72-44d3-9972-02f9e328831c","Type":"ContainerStarted","Data":"4c3f448007f0a922ec32424bd82a559efb04ead008d460c923328f952d91bd50"} Feb 24 00:10:36 crc kubenswrapper[5122]: I0224 00:10:36.250682 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:36 crc kubenswrapper[5122]: I0224 00:10:36.287797 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-fvpr8" podStartSLOduration=80.287758288 podStartE2EDuration="1m20.287758288s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:36.286845393 +0000 UTC m=+103.376299996" watchObservedRunningTime="2026-02-24 00:10:36.287758288 +0000 UTC m=+103.377212841" Feb 24 00:10:36 crc kubenswrapper[5122]: I0224 00:10:36.305633 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:10:36 crc kubenswrapper[5122]: I0224 00:10:36.372852 5122 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 24 00:10:36 crc kubenswrapper[5122]: I0224 00:10:36.774263 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:36 crc kubenswrapper[5122]: I0224 00:10:36.774282 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:36 crc kubenswrapper[5122]: I0224 00:10:36.774301 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:36 crc kubenswrapper[5122]: E0224 00:10:36.774657 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 24 00:10:36 crc kubenswrapper[5122]: E0224 00:10:36.774532 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8" Feb 24 00:10:36 crc kubenswrapper[5122]: E0224 00:10:36.774765 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 24 00:10:37 crc kubenswrapper[5122]: I0224 00:10:37.044581 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gwpx2"] Feb 24 00:10:37 crc kubenswrapper[5122]: I0224 00:10:37.253176 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:37 crc kubenswrapper[5122]: E0224 00:10:37.253316 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8"
Feb 24 00:10:37 crc kubenswrapper[5122]: I0224 00:10:37.780326 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:37 crc kubenswrapper[5122]: E0224 00:10:37.780512 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 24 00:10:38 crc kubenswrapper[5122]: I0224 00:10:38.774816 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:38 crc kubenswrapper[5122]: E0224 00:10:38.775236 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8"
Feb 24 00:10:38 crc kubenswrapper[5122]: I0224 00:10:38.775338 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:38 crc kubenswrapper[5122]: I0224 00:10:38.775388 5122 scope.go:117] "RemoveContainer" containerID="e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e"
Feb 24 00:10:38 crc kubenswrapper[5122]: E0224 00:10:38.775474 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 24 00:10:38 crc kubenswrapper[5122]: I0224 00:10:38.775530 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:38 crc kubenswrapper[5122]: E0224 00:10:38.775569 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 24 00:10:38 crc kubenswrapper[5122]: E0224 00:10:38.775613 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 24 00:10:39 crc kubenswrapper[5122]: I0224 00:10:39.774439 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:39 crc kubenswrapper[5122]: E0224 00:10:39.774570 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 24 00:10:40 crc kubenswrapper[5122]: I0224 00:10:40.774579 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:40 crc kubenswrapper[5122]: I0224 00:10:40.774844 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:40 crc kubenswrapper[5122]: I0224 00:10:40.774871 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:40 crc kubenswrapper[5122]: E0224 00:10:40.774882 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gwpx2" podUID="ae9b0319-d6e5-4434-9036-346a520931c8"
Feb 24 00:10:40 crc kubenswrapper[5122]: E0224 00:10:40.775005 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 24 00:10:40 crc kubenswrapper[5122]: E0224 00:10:40.775336 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 24 00:10:41 crc kubenswrapper[5122]: I0224 00:10:41.774788 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:41 crc kubenswrapper[5122]: E0224 00:10:41.775221 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.126448 5122 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.126575 5122 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.173483 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-rdpqq"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.218415 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.218699 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.228129 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.228614 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.228827 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.228998 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.229310 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.229506 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.229782 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.232759 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-gcvhv"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.234287 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.235197 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.235257 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.248327 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29531520-qpcf6"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.249443 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.258308 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.260463 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-jnnfl"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.270019 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.270340 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.271729 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.272378 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.272730 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.275909 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.276180 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-qpcf6"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.278657 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.278802 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.279181 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.279387 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.279676 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.279408 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.280061 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.280497 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.282446 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-t7d67"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.283051 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.283242 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.283396 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.284322 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.284462 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.284634 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.285247 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.289069 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.289922 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-t7d67"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.291854 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-node-pullsecrets\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.291919 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-image-import-ca\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.291951 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck6p2\" (UniqueName: \"kubernetes.io/projected/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-kube-api-access-ck6p2\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.291996 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-audit-dir\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.292030 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-etcd-client\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.292056 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-audit\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.292082 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-encryption-config\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.292236 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.292317 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-config\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.292368 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-serving-cert\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.292401 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.310516 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-4frxv"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.311569 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.330367 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-hcf48"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.330574 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.331493 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.333630 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.336391 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-zn588"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.336639 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.341999 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.342471 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.342677 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.342883 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.342955 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.343081 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.343735 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.343853 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.343945 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.342910 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.344419 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.345226 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-m6v2b"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.345432 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.346141 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.354803 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.355149 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.356014 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.361685 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-7fw77"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.364115 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.364338 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.364426 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.364649 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.365472 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.366389 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.366410 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.366508 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.366537 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.366401 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.366613 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.366661 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.366672 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.382649 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.382834 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.382997 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.383005 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.383558 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.383639 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.383953 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.385779 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.386013 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.386194 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.386303 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.386570 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.386673 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.386752 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.388374 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.388649 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.395305 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mkt9k"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.395657 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-available-featuregates\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: \"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.395741 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.395818 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-audit-dir\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.395887 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-serving-cert\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: \"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.395965 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396064 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx9gv\" (UniqueName: \"kubernetes.io/projected/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-kube-api-access-nx9gv\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: \"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396145 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396218 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftfp\" (UniqueName: \"kubernetes.io/projected/d7cba214-7e4b-4e74-9422-9953c7d66961-kube-api-access-lftfp\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396361 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-client-ca\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396429 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw568\" (UniqueName: \"kubernetes.io/projected/4946f9dc-ac73-42d3-b0da-8509903497e0-kube-api-access-tw568\") pod \"cluster-samples-operator-6b564684c8-vtw97\" (UID: \"4946f9dc-ac73-42d3-b0da-8509903497e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396495 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-m6v2b"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396621 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-audit-dir\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396499 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-dir\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396908 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk2np\" (UniqueName: \"kubernetes.io/projected/58f519ba-9b81-416e-8f29-0c84e8607ab1-kube-api-access-gk2np\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396942 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-config\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.396970 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-client-ca\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.397003 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4952ce-381c-46b9-b490-3403aa77106e-config\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.397028 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ks4\" (UniqueName: \"kubernetes.io/projected/e8179910-a8d8-4190-89c7-fe04a9f19e86-kube-api-access-h2ks4\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.397062 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5247eba3-d3c0-4892-a371-f5d13f08c178-serviceca\") pod \"image-pruner-29531520-qpcf6\" (UID: \"5247eba3-d3c0-4892-a371-f5d13f08c178\") " pod="openshift-image-registry/image-pruner-29531520-qpcf6" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.397106 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.400680 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.400977 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401139 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401285 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401135 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401398 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401417 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8179910-a8d8-4190-89c7-fe04a9f19e86-tmp\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401442 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-etcd-client\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401459 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401482 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401559 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f4952ce-381c-46b9-b490-3403aa77106e-trusted-ca\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " 
pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401634 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-audit\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401652 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4952ce-381c-46b9-b490-3403aa77106e-serving-cert\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401666 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401683 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401701 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphr7\" (UniqueName: 
\"kubernetes.io/projected/36b3c56e-ec77-4507-a2c4-8556b0239225-kube-api-access-qphr7\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401708 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401717 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-encryption-config\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401733 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401748 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b3c56e-ec77-4507-a2c4-8556b0239225-tmp\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401781 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401810 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-config\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401833 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-serving-cert\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401844 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401849 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8179910-a8d8-4190-89c7-fe04a9f19e86-serving-cert\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401875 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") 
" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401892 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7cba214-7e4b-4e74-9422-9953c7d66961-serving-cert\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401911 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-node-pullsecrets\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401929 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwkv\" (UniqueName: \"kubernetes.io/projected/0f4952ce-381c-46b9-b490-3403aa77106e-kube-api-access-6rwkv\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401956 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-image-import-ca\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401971 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-config\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401988 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.401996 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402009 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b3c56e-ec77-4507-a2c4-8556b0239225-serving-cert\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402028 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4946f9dc-ac73-42d3-b0da-8509903497e0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vtw97\" (UID: \"4946f9dc-ac73-42d3-b0da-8509903497e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402059 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ck6p2\" (UniqueName: 
\"kubernetes.io/projected/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-kube-api-access-ck6p2\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402102 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402118 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skcbn\" (UniqueName: \"kubernetes.io/projected/5247eba3-d3c0-4892-a371-f5d13f08c178-kube-api-access-skcbn\") pod \"image-pruner-29531520-qpcf6\" (UID: \"5247eba3-d3c0-4892-a371-f5d13f08c178\") " pod="openshift-image-registry/image-pruner-29531520-qpcf6" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402140 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402154 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-config\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402176 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-policies\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402440 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402333 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402628 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402279 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402770 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.402387 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 24 00:10:42 
crc kubenswrapper[5122]: I0224 00:10:42.402955 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.403415 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.403491 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-node-pullsecrets\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.403851 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.407002 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-config\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.408260 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.408602 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: 
I0224 00:10:42.408954 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.412268 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.412781 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.412993 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.413271 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-serving-cert\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.413328 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.413419 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.413837 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.414137 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.414268 5122 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.414218 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.414447 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.420326 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-image-import-ca\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.421597 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-audit\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.421834 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.424193 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-encryption-config\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" 
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.425643 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.427574 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.427732 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.429074 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck6p2\" (UniqueName: \"kubernetes.io/projected/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-kube-api-access-ck6p2\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.429117 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02-etcd-client\") pod \"apiserver-9ddfb9f55-rdpqq\" (UID: \"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.436152 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.436291 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.436494 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.436905 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.438212 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.441266 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.442130 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.446080 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.446554 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.452865 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.461776 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.462025 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.465260 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.465540 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.468675 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2k6m5"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.469147 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.473888 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.474456 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.476200 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.476335 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.479316 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.479485 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.480468 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.483080 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.483207 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.491450 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.491676 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.495144 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg"]
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.495357 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.499467 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.499662 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.501179 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.502997 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503129 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f4952ce-381c-46b9-b490-3403aa77106e-trusted-ca\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503157 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4952ce-381c-46b9-b490-3403aa77106e-serving-cert\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc 
kubenswrapper[5122]: I0224 00:10:42.503220 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503241 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503261 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qphr7\" (UniqueName: \"kubernetes.io/projected/36b3c56e-ec77-4507-a2c4-8556b0239225-kube-api-access-qphr7\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503282 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvc74\" (UniqueName: \"kubernetes.io/projected/37148282-9b0e-4952-8e4d-4da50bbc48f7-kube-api-access-rvc74\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.503324 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2026-02-24 00:10:58.503290362 +0000 UTC m=+125.592744875 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503402 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503434 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b3c56e-ec77-4507-a2c4-8556b0239225-tmp\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503470 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503502 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503530 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503564 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8179910-a8d8-4190-89c7-fe04a9f19e86-serving-cert\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503594 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7cba214-7e4b-4e74-9422-9953c7d66961-serving-cert\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503616 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6rwkv\" (UniqueName: \"kubernetes.io/projected/0f4952ce-381c-46b9-b490-3403aa77106e-kube-api-access-6rwkv\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" 
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503651 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-config\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503671 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503693 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b3c56e-ec77-4507-a2c4-8556b0239225-serving-cert\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503594 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.503711 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4946f9dc-ac73-42d3-b0da-8509903497e0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vtw97\" (UID: \"4946f9dc-ac73-42d3-b0da-8509903497e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.504470 5122 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.505462 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f4952ce-381c-46b9-b490-3403aa77106e-trusted-ca\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.506615 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37148282-9b0e-4952-8e4d-4da50bbc48f7-audit-dir\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.506696 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4952ce-381c-46b9-b490-3403aa77106e-serving-cert\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.506705 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.506742 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.506812 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-skcbn\" (UniqueName: \"kubernetes.io/projected/5247eba3-d3c0-4892-a371-f5d13f08c178-kube-api-access-skcbn\") pod \"image-pruner-29531520-qpcf6\" (UID: \"5247eba3-d3c0-4892-a371-f5d13f08c178\") " pod="openshift-image-registry/image-pruner-29531520-qpcf6" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.506836 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.506851 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.506862 5122 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.506917 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.506902873 +0000 UTC m=+125.596357386 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.507447 5122 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.509332 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.509421 5122 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.509442 5122 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.509694 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b3c56e-ec77-4507-a2c4-8556b0239225-tmp\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.509862 5122 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.507477919 +0000 UTC m=+125.596932432 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510045 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510299 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-config\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510491 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-policies\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510630 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510737 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510779 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-serving-cert\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510969 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511048 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-available-featuregates\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: 
\"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511116 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511176 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-etcd-client\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511214 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-encryption-config\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511280 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-serving-cert\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: \"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511313 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511344 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-audit-policies\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511381 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nx9gv\" (UniqueName: \"kubernetes.io/projected/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-kube-api-access-nx9gv\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: \"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511410 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511436 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lftfp\" (UniqueName: \"kubernetes.io/projected/d7cba214-7e4b-4e74-9422-9953c7d66961-kube-api-access-lftfp\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511467 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-client-ca\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511499 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tw568\" (UniqueName: \"kubernetes.io/projected/4946f9dc-ac73-42d3-b0da-8509903497e0-kube-api-access-tw568\") pod \"cluster-samples-operator-6b564684c8-vtw97\" (UID: \"4946f9dc-ac73-42d3-b0da-8509903497e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511539 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-dir\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511580 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gk2np\" (UniqueName: \"kubernetes.io/projected/58f519ba-9b81-416e-8f29-0c84e8607ab1-kube-api-access-gk2np\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.511805 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b 
podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.511646936 +0000 UTC m=+125.601101469 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511876 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-config\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.511942 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-client-ca\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.512006 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4952ce-381c-46b9-b490-3403aa77106e-config\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.512064 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h2ks4\" (UniqueName: 
\"kubernetes.io/projected/e8179910-a8d8-4190-89c7-fe04a9f19e86-kube-api-access-h2ks4\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.512144 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5247eba3-d3c0-4892-a371-f5d13f08c178-serviceca\") pod \"image-pruner-29531520-qpcf6\" (UID: \"5247eba3-d3c0-4892-a371-f5d13f08c178\") " pod="openshift-image-registry/image-pruner-29531520-qpcf6" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.512181 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-config\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.512203 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.512238 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc 
kubenswrapper[5122]: I0224 00:10:42.512318 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.512813 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-available-featuregates\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: \"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.513974 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b3c56e-ec77-4507-a2c4-8556b0239225-serving-cert\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.514234 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8179910-a8d8-4190-89c7-fe04a9f19e86-tmp\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.514391 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-etcd-serving-ca\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.514549 5122 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.514724 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510727 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-dm88r"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.515905 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-dir\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.515357 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8179910-a8d8-4190-89c7-fe04a9f19e86-serving-cert\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.516147 5122 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.515918 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-client-ca\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.516292 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.516614 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.516588044 +0000 UTC m=+125.606042557 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.516634 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8179910-a8d8-4190-89c7-fe04a9f19e86-tmp\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.514726 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-config\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.516862 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.516987 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: 
\"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.510577 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.517309 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-policies\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.517624 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.518020 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-client-ca\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.518485 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/5247eba3-d3c0-4892-a371-f5d13f08c178-serviceca\") pod \"image-pruner-29531520-qpcf6\" (UID: \"5247eba3-d3c0-4892-a371-f5d13f08c178\") " pod="openshift-image-registry/image-pruner-29531520-qpcf6" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.518681 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-config\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.519367 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4952ce-381c-46b9-b490-3403aa77106e-config\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.515240 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7cba214-7e4b-4e74-9422-9953c7d66961-serving-cert\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.520596 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.520779 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7cba214-7e4b-4e74-9422-9953c7d66961-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.521184 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4946f9dc-ac73-42d3-b0da-8509903497e0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vtw97\" (UID: \"4946f9dc-ac73-42d3-b0da-8509903497e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.522024 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.524008 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.525635 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.525817 5122 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.525818 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.526188 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.526810 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-serving-cert\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: \"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.532543 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.541125 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.542390 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.542585 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-dm88r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.547848 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-xtm2m"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.548586 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.552146 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.555406 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.555616 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.560449 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.562758 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.562931 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.572142 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.572356 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.576409 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4vqwn"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.576604 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.581401 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.581496 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.581625 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.588356 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29531520-qpcf6"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.588383 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.588396 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.588406 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-rdpqq"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.588417 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-46xbn"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.588628 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.592130 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5jfb2"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.592267 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596570 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-4frxv"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596605 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596619 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596634 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7fw77"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596646 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-zn588"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596657 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596667 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596679 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-t7d67"] Feb 24 00:10:42 crc 
kubenswrapper[5122]: I0224 00:10:42.596690 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596701 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-gcvhv"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596716 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596768 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-jnnfl"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596825 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596719 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596842 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596855 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2k6m5"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596866 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-m6v2b"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596877 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596888 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596897 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596908 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596917 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.596932 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tvgxr"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.600264 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.600643 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-267zx"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.601091 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tvgxr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.609888 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-69cfh"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.610100 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613222 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613253 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613265 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613277 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613288 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-dm88r"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613299 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mkt9k"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613308 5122 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613317 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613326 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5jfb2"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613336 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tvgxr"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613345 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613355 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4vqwn"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613365 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613374 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613384 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-267zx"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.613532 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-69cfh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616498 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-serving-cert\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616535 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-etcd-client\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616552 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-encryption-config\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616571 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-audit-policies\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616613 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-etcd-serving-ca\") pod \"apiserver-8596bd845d-zn588\" (UID: 
\"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616655 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvc74\" (UniqueName: \"kubernetes.io/projected/37148282-9b0e-4952-8e4d-4da50bbc48f7-kube-api-access-rvc74\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616704 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616742 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616771 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37148282-9b0e-4952-8e4d-4da50bbc48f7-audit-dir\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.616850 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37148282-9b0e-4952-8e4d-4da50bbc48f7-audit-dir\") pod \"apiserver-8596bd845d-zn588\" (UID: 
\"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.617174 5122 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.617442 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.617477 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs podName:ae9b0319-d6e5-4434-9036-346a520931c8 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.617450966 +0000 UTC m=+125.706905499 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs") pod "network-metrics-daemon-gwpx2" (UID: "ae9b0319-d6e5-4434-9036-346a520931c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.617512 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-etcd-serving-ca\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.617551 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37148282-9b0e-4952-8e4d-4da50bbc48f7-audit-policies\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.619467 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-encryption-config\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.619688 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-etcd-client\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.620142 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/37148282-9b0e-4952-8e4d-4da50bbc48f7-serving-cert\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.641535 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.660770 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.680779 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.717553 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c246391f-7d72-44c4-be1e-d9c37480d022-ca-trust-extracted\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.717698 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-registry-tls\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.717769 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvbs6\" (UniqueName: 
\"kubernetes.io/projected/0f2d24c5-cbfa-410d-8105-d67830202ff1-kube-api-access-jvbs6\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718035 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-service-ca\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718116 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-oauth-config\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718158 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c4d739bc-bd88-426e-8683-d34b790d5d2f-machine-approver-tls\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718209 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-images\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718235 5122 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4bp\" (UniqueName: \"kubernetes.io/projected/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-kube-api-access-kh4bp\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718286 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a476700-74f6-4579-b7f8-449e3c4ce746-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718329 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-oauth-serving-cert\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718377 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-serving-cert\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718449 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d407b1a-a260-41a6-a68d-b00b993fb77a-serving-cert\") pod 
\"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718475 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6d407b1a-a260-41a6-a68d-b00b993fb77a-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718504 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-config\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718663 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njs9r\" (UniqueName: \"kubernetes.io/projected/6d407b1a-a260-41a6-a68d-b00b993fb77a-kube-api-access-njs9r\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718768 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-bound-sa-token\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " 
pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718791 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vb56\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-kube-api-access-4vb56\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.718823 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-trusted-ca\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.719430 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-config\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.719900 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8z9\" (UniqueName: \"kubernetes.io/projected/8a476700-74f6-4579-b7f8-449e3c4ce746-kube-api-access-vx8z9\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720152 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-registry-certificates\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720231 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-trusted-ca-bundle\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720258 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720305 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a476700-74f6-4579-b7f8-449e3c4ce746-config\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720335 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk65r\" (UniqueName: \"kubernetes.io/projected/a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5-kube-api-access-rk65r\") pod \"downloads-747b44746d-m6v2b\" (UID: \"a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5\") " 
pod="openshift-console/downloads-747b44746d-m6v2b" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720385 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4d739bc-bd88-426e-8683-d34b790d5d2f-auth-proxy-config\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720410 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d739bc-bd88-426e-8683-d34b790d5d2f-config\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720504 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfhd7\" (UniqueName: \"kubernetes.io/projected/c4d739bc-bd88-426e-8683-d34b790d5d2f-kube-api-access-bfhd7\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720668 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720724 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c246391f-7d72-44c4-be1e-d9c37480d022-installation-pull-secrets\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.720751 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d407b1a-a260-41a6-a68d-b00b993fb77a-config\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.722425 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.222409882 +0000 UTC m=+110.311864395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.724035 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.731806 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-rdpqq"] Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.741203 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.761515 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.774422 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.774854 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.775055 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.781701 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.801815 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.821497 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822135 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.822419 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.322387029 +0000 UTC m=+110.411841532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822576 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-srv-cert\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822620 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwq6\" (UniqueName: \"kubernetes.io/projected/9cc08205-f0b1-47dc-a44c-da4611ff6b88-kube-api-access-gkwq6\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822643 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-config\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822661 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/89a777c8-8c85-45e5-b60b-6abb996b25f8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822695 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72pcs\" (UniqueName: \"kubernetes.io/projected/865b2fc7-0d57-48d7-a665-fa9a93257469-kube-api-access-72pcs\") pod \"package-server-manager-77f986bd66-6ccnj\" (UID: \"865b2fc7-0d57-48d7-a665-fa9a93257469\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822713 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc07aacc-6c08-4ef3-a058-b6a823315eec-service-ca-bundle\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822739 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fca96a93-d382-46a6-81cf-59840b39671e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822763 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7g2z\" (UniqueName: \"kubernetes.io/projected/506b0459-7f41-4507-8377-f1fc79c51113-kube-api-access-d7g2z\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822822 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-trusted-ca-bundle\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822846 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822870 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f4ee5a2-9ca3-4990-896b-c81fe77da971-config-volume\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822898 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-tmp\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822917 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-config\") pod 
\"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822937 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rk65r\" (UniqueName: \"kubernetes.io/projected/a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5-kube-api-access-rk65r\") pod \"downloads-747b44746d-m6v2b\" (UID: \"a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5\") " pod="openshift-console/downloads-747b44746d-m6v2b" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822963 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvn9t\" (UniqueName: \"kubernetes.io/projected/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-kube-api-access-rvn9t\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.822988 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-metrics-certs\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823030 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bfhd7\" (UniqueName: \"kubernetes.io/projected/c4d739bc-bd88-426e-8683-d34b790d5d2f-kube-api-access-bfhd7\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823056 5122 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2f4ee5a2-9ca3-4990-896b-c81fe77da971-tmp-dir\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823077 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgv8r\" (UniqueName: \"kubernetes.io/projected/4e08c688-1af4-4f0a-9cca-26dbe17bb618-kube-api-access-sgv8r\") pod \"migrator-866fcbc849-spmnw\" (UID: \"4e08c688-1af4-4f0a-9cca-26dbe17bb618\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823114 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-mountpoint-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823134 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphkq\" (UniqueName: \"kubernetes.io/projected/51ccf528-5b90-43e8-9e17-d283a0b1723f-kube-api-access-rphkq\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823172 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-cabundle\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r" 
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823192 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggjb\" (UniqueName: \"kubernetes.io/projected/fca96a93-d382-46a6-81cf-59840b39671e-kube-api-access-pggjb\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823233 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-plugins-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823261 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823596 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fntv\" (UniqueName: \"kubernetes.io/projected/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-kube-api-access-6fntv\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823754 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/506b0459-7f41-4507-8377-f1fc79c51113-tmpfs\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823794 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwlj\" (UniqueName: \"kubernetes.io/projected/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-kube-api-access-knwlj\") pod \"multus-admission-controller-69db94689b-4vqwn\" (UID: \"98c954e4-8a6f-4f90-a365-c781ba1eb8d9\") " pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823836 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-cert\") pod \"ingress-canary-tvgxr\" (UID: \"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf\") " pod="openshift-ingress-canary/ingress-canary-tvgxr"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823865 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27ed85a1-debc-420c-8603-b108f7957a7c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.823954 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c246391f-7d72-44c4-be1e-d9c37480d022-ca-trust-extracted\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824001 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9cc08205-f0b1-47dc-a44c-da4611ff6b88-tmp-dir\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824105 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c87ba7-0787-425a-ab2c-5c5069cc14d3-config\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824144 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-registry-tls\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824191 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8537760-49d8-4e35-9333-65c360424b0d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824212 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-profile-collector-cert\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824258 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-service-ca\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824292 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824312 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-ready\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824340 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4bp\" (UniqueName: \"kubernetes.io/projected/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-kube-api-access-kh4bp\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824388 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-csi-data-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824413 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-oauth-config\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824435 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-images\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824478 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-oauth-serving-cert\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824508 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-serving-cert\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824557 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-trusted-ca-bundle\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824641 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c246391f-7d72-44c4-be1e-d9c37480d022-ca-trust-extracted\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824725 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d407b1a-a260-41a6-a68d-b00b993fb77a-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824762 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824809 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-images\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824849 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-webhook-certs\") pod \"multus-admission-controller-69db94689b-4vqwn\" (UID: \"98c954e4-8a6f-4f90-a365-c781ba1eb8d9\") " pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.824988 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-service-ca\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825150 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-oauth-serving-cert\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825260 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-socket-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825340 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-bound-sa-token\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825458 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-tmpfs\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825495 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea6a33f4-db81-4724-8502-c62734961fc8-config\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825541 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-ca\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825571 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv7qp\" (UniqueName: \"kubernetes.io/projected/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-kube-api-access-zv7qp\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825605 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cd2g\" (UniqueName: \"kubernetes.io/projected/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-kube-api-access-8cd2g\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825642 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-images\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825734 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rrbn\" (UniqueName: \"kubernetes.io/projected/fc07aacc-6c08-4ef3-a058-b6a823315eec-kube-api-access-4rrbn\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825802 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8m6c\" (UniqueName: \"kubernetes.io/projected/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-kube-api-access-f8m6c\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825840 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f4ee5a2-9ca3-4990-896b-c81fe77da971-metrics-tls\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825862 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-apiservice-cert\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825881 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmvq\" (UniqueName: \"kubernetes.io/projected/f0657a36-859b-4454-8940-c1b68b1161c6-kube-api-access-mcmvq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2pxbg\" (UID: \"f0657a36-859b-4454-8940-c1b68b1161c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825901 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvzwl\" (UniqueName: \"kubernetes.io/projected/33257dc3-785e-4b1e-9087-01a0cb290b5c-kube-api-access-wvzwl\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.825938 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea6a33f4-db81-4724-8502-c62734961fc8-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7"
Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.825961 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.325942568 +0000 UTC m=+110.415397091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826077 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27ed85a1-debc-420c-8603-b108f7957a7c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826145 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27ed85a1-debc-420c-8603-b108f7957a7c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826269 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vx8z9\" (UniqueName: \"kubernetes.io/projected/8a476700-74f6-4579-b7f8-449e3c4ce746-kube-api-access-vx8z9\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826315 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38c87ba7-0787-425a-ab2c-5c5069cc14d3-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826344 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/27ed85a1-debc-420c-8603-b108f7957a7c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826388 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzc9v\" (UniqueName: \"kubernetes.io/projected/e8537760-49d8-4e35-9333-65c360424b0d-kube-api-access-bzc9v\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826490 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9cc08205-f0b1-47dc-a44c-da4611ff6b88-metrics-tls\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826610 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fca96a93-d382-46a6-81cf-59840b39671e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826665 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-registry-certificates\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826694 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f0657a36-859b-4454-8940-c1b68b1161c6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2pxbg\" (UID: \"f0657a36-859b-4454-8940-c1b68b1161c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.826732 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-registration-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.827513 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.827578 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-597tv\" (UniqueName: \"kubernetes.io/projected/27ed85a1-debc-420c-8603-b108f7957a7c-kube-api-access-597tv\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.827677 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-registry-certificates\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.827684 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a476700-74f6-4579-b7f8-449e3c4ce746-config\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.827749 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj4cn\" (UniqueName: \"kubernetes.io/projected/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-kube-api-access-cj4cn\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.827781 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-stats-auth\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.829495 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-registry-tls\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.829584 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d407b1a-a260-41a6-a68d-b00b993fb77a-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.829665 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa7aa06-a13d-414d-8164-544e84019bab-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.829710 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4d739bc-bd88-426e-8683-d34b790d5d2f-auth-proxy-config\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.829740 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d739bc-bd88-426e-8683-d34b790d5d2f-config\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830128 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-serving-cert\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830177 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d739bc-bd88-426e-8683-d34b790d5d2f-config\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830187 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830227 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4d739bc-bd88-426e-8683-d34b790d5d2f-auth-proxy-config\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830236 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa7aa06-a13d-414d-8164-544e84019bab-config\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830278 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-key\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830315 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/51ccf528-5b90-43e8-9e17-d283a0b1723f-tmp-dir\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830420 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-tmpfs\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830449 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0aa7aa06-a13d-414d-8164-544e84019bab-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830482 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwfnj\" (UniqueName: \"kubernetes.io/projected/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-kube-api-access-kwfnj\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830523 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c246391f-7d72-44c4-be1e-d9c37480d022-installation-pull-secrets\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830551 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d407b1a-a260-41a6-a68d-b00b993fb77a-config\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830654 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jvbs6\" (UniqueName: \"kubernetes.io/projected/0f2d24c5-cbfa-410d-8105-d67830202ff1-kube-api-access-jvbs6\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830710 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-client\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830737 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/27ed85a1-debc-420c-8603-b108f7957a7c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830771 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5h5x\" (UniqueName: \"kubernetes.io/projected/89a777c8-8c85-45e5-b60b-6abb996b25f8-kube-api-access-x5h5x\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv"
Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830793 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv"
Feb 24
00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.830832 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a476700-74f6-4579-b7f8-449e3c4ce746-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831133 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831206 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c4d739bc-bd88-426e-8683-d34b790d5d2f-machine-approver-tls\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831239 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktzw5\" (UniqueName: \"kubernetes.io/projected/2f4ee5a2-9ca3-4990-896b-c81fe77da971-kube-api-access-ktzw5\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831265 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-certs\") pod 
\"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831290 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea6a33f4-db81-4724-8502-c62734961fc8-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831317 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5flnm\" (UniqueName: \"kubernetes.io/projected/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-kube-api-access-5flnm\") pod \"ingress-canary-tvgxr\" (UID: \"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf\") " pod="openshift-ingress-canary/ingress-canary-tvgxr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831342 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831371 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa7aa06-a13d-414d-8164-544e84019bab-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:42 crc 
kubenswrapper[5122]: I0224 00:10:42.831407 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89a777c8-8c85-45e5-b60b-6abb996b25f8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831432 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6a33f4-db81-4724-8502-c62734961fc8-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831445 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d407b1a-a260-41a6-a68d-b00b993fb77a-config\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831476 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6d407b1a-a260-41a6-a68d-b00b993fb77a-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831718 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6d407b1a-a260-41a6-a68d-b00b993fb77a-tmp\") pod 
\"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831762 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-config\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831802 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h57f5\" (UniqueName: \"kubernetes.io/projected/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-kube-api-access-h57f5\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831861 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-njs9r\" (UniqueName: \"kubernetes.io/projected/6d407b1a-a260-41a6-a68d-b00b993fb77a-kube-api-access-njs9r\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831896 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-webhook-cert\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:42 crc 
kubenswrapper[5122]: I0224 00:10:42.831922 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831932 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a476700-74f6-4579-b7f8-449e3c4ce746-config\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.831962 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832010 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8537760-49d8-4e35-9333-65c360424b0d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832041 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-node-bootstrap-token\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832065 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlr7\" (UniqueName: \"kubernetes.io/projected/91922081-9786-47ef-ad37-7d1092f63918-kube-api-access-5zlr7\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832129 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4vb56\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-kube-api-access-4vb56\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832165 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-srv-cert\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832188 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/865b2fc7-0d57-48d7-a665-fa9a93257469-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6ccnj\" (UID: \"865b2fc7-0d57-48d7-a665-fa9a93257469\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832306 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-trusted-ca\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832309 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-config\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832338 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-config\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832370 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51ccf528-5b90-43e8-9e17-d283a0b1723f-serving-cert\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832482 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-service-ca\") pod 
\"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832508 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/38c87ba7-0787-425a-ab2c-5c5069cc14d3-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832565 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832612 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89a777c8-8c85-45e5-b60b-6abb996b25f8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832635 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-serving-cert\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832656 5122 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c87ba7-0787-425a-ab2c-5c5069cc14d3-serving-cert\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832694 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-default-certificate\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832782 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0f2d24c5-cbfa-410d-8105-d67830202ff1-console-oauth-config\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.832941 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-config\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.833463 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-trusted-ca\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc 
kubenswrapper[5122]: I0224 00:10:42.833779 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c4d739bc-bd88-426e-8683-d34b790d5d2f-machine-approver-tls\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.833826 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c246391f-7d72-44c4-be1e-d9c37480d022-installation-pull-secrets\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.833840 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a476700-74f6-4579-b7f8-449e3c4ce746-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.840838 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.861271 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.886546 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.900711 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.920870 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934240 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.934414 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.434391442 +0000 UTC m=+110.523845955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934484 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934520 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-images\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934552 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-webhook-certs\") pod \"multus-admission-controller-69db94689b-4vqwn\" (UID: \"98c954e4-8a6f-4f90-a365-c781ba1eb8d9\") " pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934596 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-socket-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: 
\"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934630 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-tmpfs\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934820 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea6a33f4-db81-4724-8502-c62734961fc8-config\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934871 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-socket-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.934881 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-ca\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935000 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zv7qp\" (UniqueName: \"kubernetes.io/projected/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-kube-api-access-zv7qp\") pod 
\"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935038 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8cd2g\" (UniqueName: \"kubernetes.io/projected/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-kube-api-access-8cd2g\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935061 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4rrbn\" (UniqueName: \"kubernetes.io/projected/fc07aacc-6c08-4ef3-a058-b6a823315eec-kube-api-access-4rrbn\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935107 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f8m6c\" (UniqueName: \"kubernetes.io/projected/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-kube-api-access-f8m6c\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935128 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f4ee5a2-9ca3-4990-896b-c81fe77da971-metrics-tls\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935148 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-apiservice-cert\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935180 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mcmvq\" (UniqueName: \"kubernetes.io/projected/f0657a36-859b-4454-8940-c1b68b1161c6-kube-api-access-mcmvq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2pxbg\" (UID: \"f0657a36-859b-4454-8940-c1b68b1161c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935202 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvzwl\" (UniqueName: \"kubernetes.io/projected/33257dc3-785e-4b1e-9087-01a0cb290b5c-kube-api-access-wvzwl\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935209 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-tmpfs\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935226 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea6a33f4-db81-4724-8502-c62734961fc8-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 
crc kubenswrapper[5122]: I0224 00:10:42.935265 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27ed85a1-debc-420c-8603-b108f7957a7c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935293 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-images\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935298 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27ed85a1-debc-420c-8603-b108f7957a7c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935374 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38c87ba7-0787-425a-ab2c-5c5069cc14d3-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935397 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/27ed85a1-debc-420c-8603-b108f7957a7c-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935430 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bzc9v\" (UniqueName: \"kubernetes.io/projected/e8537760-49d8-4e35-9333-65c360424b0d-kube-api-access-bzc9v\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935455 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9cc08205-f0b1-47dc-a44c-da4611ff6b88-metrics-tls\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935481 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fca96a93-d382-46a6-81cf-59840b39671e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935532 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f0657a36-859b-4454-8940-c1b68b1161c6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2pxbg\" (UID: \"f0657a36-859b-4454-8940-c1b68b1161c6\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935547 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea6a33f4-db81-4724-8502-c62734961fc8-config\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935566 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-registration-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935597 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935621 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-597tv\" (UniqueName: \"kubernetes.io/projected/27ed85a1-debc-420c-8603-b108f7957a7c-kube-api-access-597tv\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935630 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-registration-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935648 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cj4cn\" (UniqueName: \"kubernetes.io/projected/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-kube-api-access-cj4cn\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935665 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-stats-auth\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935686 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa7aa06-a13d-414d-8164-544e84019bab-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935710 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935729 5122 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa7aa06-a13d-414d-8164-544e84019bab-config\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935747 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-key\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935762 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/51ccf528-5b90-43e8-9e17-d283a0b1723f-tmp-dir\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935781 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-tmpfs\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935797 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0aa7aa06-a13d-414d-8164-544e84019bab-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935817 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwfnj\" (UniqueName: \"kubernetes.io/projected/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-kube-api-access-kwfnj\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935841 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-client\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935860 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/27ed85a1-debc-420c-8603-b108f7957a7c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935880 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x5h5x\" (UniqueName: \"kubernetes.io/projected/89a777c8-8c85-45e5-b60b-6abb996b25f8-kube-api-access-x5h5x\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935903 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935924 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ktzw5\" (UniqueName: \"kubernetes.io/projected/2f4ee5a2-9ca3-4990-896b-c81fe77da971-kube-api-access-ktzw5\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935940 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-certs\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935957 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea6a33f4-db81-4724-8502-c62734961fc8-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935973 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5flnm\" (UniqueName: \"kubernetes.io/projected/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-kube-api-access-5flnm\") pod \"ingress-canary-tvgxr\" (UID: \"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf\") " pod="openshift-ingress-canary/ingress-canary-tvgxr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.935988 5122 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936005 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa7aa06-a13d-414d-8164-544e84019bab-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936025 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89a777c8-8c85-45e5-b60b-6abb996b25f8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936042 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6a33f4-db81-4724-8502-c62734961fc8-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936067 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h57f5\" (UniqueName: \"kubernetes.io/projected/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-kube-api-access-h57f5\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: 
\"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936091 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-webhook-cert\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936124 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936141 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936181 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8537760-49d8-4e35-9333-65c360424b0d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936205 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" 
(UniqueName: \"kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-node-bootstrap-token\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936224 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5zlr7\" (UniqueName: \"kubernetes.io/projected/91922081-9786-47ef-ad37-7d1092f63918-kube-api-access-5zlr7\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936240 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-srv-cert\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936256 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/865b2fc7-0d57-48d7-a665-fa9a93257469-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6ccnj\" (UID: \"865b2fc7-0d57-48d7-a665-fa9a93257469\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936279 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51ccf528-5b90-43e8-9e17-d283a0b1723f-serving-cert\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc 
kubenswrapper[5122]: I0224 00:10:42.936278 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0aa7aa06-a13d-414d-8164-544e84019bab-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936397 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fca96a93-d382-46a6-81cf-59840b39671e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936479 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936557 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/38c87ba7-0787-425a-ab2c-5c5069cc14d3-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936610 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist\") pod 
\"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936652 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89a777c8-8c85-45e5-b60b-6abb996b25f8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936679 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-serving-cert\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936705 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c87ba7-0787-425a-ab2c-5c5069cc14d3-serving-cert\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936730 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-default-certificate\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936792 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/51ccf528-5b90-43e8-9e17-d283a0b1723f-tmp-dir\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936831 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-srv-cert\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936857 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gkwq6\" (UniqueName: \"kubernetes.io/projected/9cc08205-f0b1-47dc-a44c-da4611ff6b88-kube-api-access-gkwq6\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936876 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-config\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936894 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89a777c8-8c85-45e5-b60b-6abb996b25f8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936926 5122 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-72pcs\" (UniqueName: \"kubernetes.io/projected/865b2fc7-0d57-48d7-a665-fa9a93257469-kube-api-access-72pcs\") pod \"package-server-manager-77f986bd66-6ccnj\" (UID: \"865b2fc7-0d57-48d7-a665-fa9a93257469\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936943 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc07aacc-6c08-4ef3-a058-b6a823315eec-service-ca-bundle\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936960 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27ed85a1-debc-420c-8603-b108f7957a7c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936966 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/27ed85a1-debc-420c-8603-b108f7957a7c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.936965 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fca96a93-d382-46a6-81cf-59840b39671e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937109 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89a777c8-8c85-45e5-b60b-6abb996b25f8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937279 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7g2z\" (UniqueName: \"kubernetes.io/projected/506b0459-7f41-4507-8377-f1fc79c51113-kube-api-access-d7g2z\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937329 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f4ee5a2-9ca3-4990-896b-c81fe77da971-config-volume\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937372 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-tmp\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937394 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-config\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: 
\"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937416 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvn9t\" (UniqueName: \"kubernetes.io/projected/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-kube-api-access-rvn9t\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937417 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea6a33f4-db81-4724-8502-c62734961fc8-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937432 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-metrics-certs\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937484 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2f4ee5a2-9ca3-4990-896b-c81fe77da971-tmp-dir\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937497 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-tuning-conf-dir\") pod 
\"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937505 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sgv8r\" (UniqueName: \"kubernetes.io/projected/4e08c688-1af4-4f0a-9cca-26dbe17bb618-kube-api-access-sgv8r\") pod \"migrator-866fcbc849-spmnw\" (UID: \"4e08c688-1af4-4f0a-9cca-26dbe17bb618\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937527 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-mountpoint-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937548 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rphkq\" (UniqueName: \"kubernetes.io/projected/51ccf528-5b90-43e8-9e17-d283a0b1723f-kube-api-access-rphkq\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937594 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-tmpfs\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937595 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-mountpoint-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937657 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-cabundle\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937709 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pggjb\" (UniqueName: \"kubernetes.io/projected/fca96a93-d382-46a6-81cf-59840b39671e-kube-api-access-pggjb\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937792 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-plugins-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937819 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937908 5122 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-plugins-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937914 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-tmp\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937910 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6fntv\" (UniqueName: \"kubernetes.io/projected/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-kube-api-access-6fntv\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937967 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/506b0459-7f41-4507-8377-f1fc79c51113-tmpfs\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.937998 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-knwlj\" (UniqueName: \"kubernetes.io/projected/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-kube-api-access-knwlj\") pod \"multus-admission-controller-69db94689b-4vqwn\" (UID: \"98c954e4-8a6f-4f90-a365-c781ba1eb8d9\") " pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 
00:10:42.938033 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-cert\") pod \"ingress-canary-tvgxr\" (UID: \"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf\") " pod="openshift-ingress-canary/ingress-canary-tvgxr" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938059 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27ed85a1-debc-420c-8603-b108f7957a7c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: E0224 00:10:42.938161 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.438089106 +0000 UTC m=+110.527543689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938190 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9cc08205-f0b1-47dc-a44c-da4611ff6b88-tmp-dir\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938230 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c87ba7-0787-425a-ab2c-5c5069cc14d3-config\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938262 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8537760-49d8-4e35-9333-65c360424b0d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938282 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-profile-collector-cert\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938307 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938323 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-ready\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938343 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-csi-data-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938458 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/506b0459-7f41-4507-8377-f1fc79c51113-tmpfs\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938506 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-csi-data-dir\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938532 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27ed85a1-debc-420c-8603-b108f7957a7c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938596 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9cc08205-f0b1-47dc-a44c-da4611ff6b88-tmp-dir\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938686 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/38c87ba7-0787-425a-ab2c-5c5069cc14d3-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938794 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2f4ee5a2-9ca3-4990-896b-c81fe77da971-tmp-dir\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938900 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-ready\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.938980 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.940466 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.940865 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/27ed85a1-debc-420c-8603-b108f7957a7c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.940911 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6a33f4-db81-4724-8502-c62734961fc8-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.942881 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/89a777c8-8c85-45e5-b60b-6abb996b25f8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.943122 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c87ba7-0787-425a-ab2c-5c5069cc14d3-serving-cert\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.943550 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.950020 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c87ba7-0787-425a-ab2c-5c5069cc14d3-config\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.963405 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 24 00:10:42 crc kubenswrapper[5122]: I0224 00:10:42.981082 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.001264 5122 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.020551 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.029404 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9cc08205-f0b1-47dc-a44c-da4611ff6b88-metrics-tls\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.039246 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.039430 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.539411871 +0000 UTC m=+110.628866384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.039679 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.040068 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.540057479 +0000 UTC m=+110.629511992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.040445 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.060913 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.070417 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.089259 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.099390 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.101388 5122 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.121598 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.140486 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.140874 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.141112 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.641055604 +0000 UTC m=+110.730510147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.141557 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.141893 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.641875807 +0000 UTC m=+110.731330360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.161001 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.171271 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa7aa06-a13d-414d-8164-544e84019bab-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.181246 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.188820 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa7aa06-a13d-414d-8164-544e84019bab-config\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.200781 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.220873 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.240456 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.242793 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.243028 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.743008106 +0000 UTC m=+110.832462629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.243408 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.243842 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.743815259 +0000 UTC m=+110.833269812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.262713 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.273379 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fca96a93-d382-46a6-81cf-59840b39671e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.281053 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.286414 5122 generic.go:358] "Generic (PLEG): container finished" podID="2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02" containerID="0d01451aefabe37bbc82b66216621538d3de68f15b7d62163615394c833276e2" exitCode=0 Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.286484 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" event={"ID":"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02","Type":"ContainerDied","Data":"0d01451aefabe37bbc82b66216621538d3de68f15b7d62163615394c833276e2"} Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.286548 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" event={"ID":"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02","Type":"ContainerStarted","Data":"b0bbbf604d2b269fcafac47d44d71f9eacd3b4922e243cb1eab35d875d158a17"} Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.301186 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.321514 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.332319 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51ccf528-5b90-43e8-9e17-d283a0b1723f-serving-cert\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.341921 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.345491 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.845389651 +0000 UTC m=+110.934844204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.345577 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.355048 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-client\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.360191 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.361425 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.361721 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.861709357 +0000 UTC m=+110.951163870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.410226 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.411287 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.415960 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-ca\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.419364 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-config\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.421260 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.429501 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/51ccf528-5b90-43e8-9e17-d283a0b1723f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.441749 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.460359 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.461020 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.461248 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.961221351 +0000 UTC m=+111.050675894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.462370 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.462668 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:43.962658801 +0000 UTC m=+111.052113314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.481429 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.492221 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/865b2fc7-0d57-48d7-a665-fa9a93257469-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6ccnj\" (UID: \"865b2fc7-0d57-48d7-a665-fa9a93257469\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.499684 5122 request.go:752] "Waited before sending request" delay="1.003191355s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.501570 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.520895 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.530802 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f0657a36-859b-4454-8940-c1b68b1161c6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2pxbg\" (UID: \"f0657a36-859b-4454-8940-c1b68b1161c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.540627 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.560621 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.564333 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.564480 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.064460479 +0000 UTC m=+111.153915002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.564760 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.565102 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.065090727 +0000 UTC m=+111.154545240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.572621 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-serving-cert\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.582406 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.602003 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.622889 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.640507 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.649092 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-config\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.665883 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.666772 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.166755861 +0000 UTC m=+111.256210374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.681596 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-skcbn\" (UniqueName: \"kubernetes.io/projected/5247eba3-d3c0-4892-a371-f5d13f08c178-kube-api-access-skcbn\") pod \"image-pruner-29531520-qpcf6\" (UID: \"5247eba3-d3c0-4892-a371-f5d13f08c178\") " pod="openshift-image-registry/image-pruner-29531520-qpcf6"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.703934 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qphr7\" (UniqueName: \"kubernetes.io/projected/36b3c56e-ec77-4507-a2c4-8556b0239225-kube-api-access-qphr7\") pod \"controller-manager-65b6cccf98-lxjqf\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.719622 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rwkv\" (UniqueName: \"kubernetes.io/projected/0f4952ce-381c-46b9-b490-3403aa77106e-kube-api-access-6rwkv\") pod \"console-operator-67c89758df-t7d67\" (UID: \"0f4952ce-381c-46b9-b490-3403aa77106e\") " pod="openshift-console-operator/console-operator-67c89758df-t7d67"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.741051 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.750398 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk2np\" (UniqueName: \"kubernetes.io/projected/58f519ba-9b81-416e-8f29-0c84e8607ab1-kube-api-access-gk2np\") pod \"oauth-openshift-66458b6674-jnnfl\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.762876 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.768665 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.769116 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.269095594 +0000 UTC m=+111.358550117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.774630 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.782066 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.817023 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw568\" (UniqueName: \"kubernetes.io/projected/4946f9dc-ac73-42d3-b0da-8509903497e0-kube-api-access-tw568\") pod \"cluster-samples-operator-6b564684c8-vtw97\" (UID: \"4946f9dc-ac73-42d3-b0da-8509903497e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.825779 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-qpcf6"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.836176 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2ks4\" (UniqueName: \"kubernetes.io/projected/e8179910-a8d8-4190-89c7-fe04a9f19e86-kube-api-access-h2ks4\") pod \"route-controller-manager-776cdc94d6-b5hst\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.856906 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx9gv\" (UniqueName: \"kubernetes.io/projected/47d73a9e-a36f-42a0-a81b-f3e0c51259e8-kube-api-access-nx9gv\") pod \"openshift-config-operator-5777786469-gcvhv\" (UID: \"47d73a9e-a36f-42a0-a81b-f3e0c51259e8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.857239 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.870812 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.871603 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.371580551 +0000 UTC m=+111.461035074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.880344 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.882526 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftfp\" (UniqueName: \"kubernetes.io/projected/d7cba214-7e4b-4e74-9422-9953c7d66961-kube-api-access-lftfp\") pod \"authentication-operator-7f5c659b84-vbjdh\" (UID: \"d7cba214-7e4b-4e74-9422-9953c7d66961\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.890870 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-t7d67"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.892833 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-key\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r"
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.900769 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.920898 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.934829 5122 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.934937 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume podName:91922081-9786-47ef-ad37-7d1092f63918 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.434913233 +0000 UTC m=+111.524367746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume") pod "collect-profiles-29531520-j8d8q" (UID: "91922081-9786-47ef-ad37-7d1092f63918") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.935978 5122 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.936020 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f4ee5a2-9ca3-4990-896b-c81fe77da971-metrics-tls podName:2f4ee5a2-9ca3-4990-896b-c81fe77da971 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.436010924 +0000 UTC m=+111.525465437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2f4ee5a2-9ca3-4990-896b-c81fe77da971-metrics-tls") pod "dns-default-267zx" (UID: "2f4ee5a2-9ca3-4990-896b-c81fe77da971") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.936107 5122 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.936161 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-webhook-certs podName:98c954e4-8a6f-4f90-a365-c781ba1eb8d9 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.436153318 +0000 UTC m=+111.525607831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-webhook-certs") pod "multus-admission-controller-69db94689b-4vqwn" (UID: "98c954e4-8a6f-4f90-a365-c781ba1eb8d9") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.936179 5122 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.936202 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-apiservice-cert podName:10e3bdb7-6f23-4553-8536-bf73e0b2a45c nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.436196529 +0000 UTC m=+111.525651042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-apiservice-cert") pod "packageserver-7d4fc7d867-q8fpc" (UID: "10e3bdb7-6f23-4553-8536-bf73e0b2a45c") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937274 5122 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937504 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-node-bootstrap-token podName:33257dc3-785e-4b1e-9087-01a0cb290b5c nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.437467765 +0000 UTC m=+111.526922288 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-node-bootstrap-token") pod "machine-config-server-69cfh" (UID: "33257dc3-785e-4b1e-9087-01a0cb290b5c") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937535 5122 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937597 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-srv-cert podName:506b0459-7f41-4507-8377-f1fc79c51113 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.437558457 +0000 UTC m=+111.527012970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-srv-cert") pod "olm-operator-5cdf44d969-jq4c4" (UID: "506b0459-7f41-4507-8377-f1fc79c51113") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937622 5122 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937644 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-webhook-cert podName:10e3bdb7-6f23-4553-8536-bf73e0b2a45c nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.437638569 +0000 UTC m=+111.527093072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-webhook-cert") pod "packageserver-7d4fc7d867-q8fpc" (UID: "10e3bdb7-6f23-4553-8536-bf73e0b2a45c") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937692 5122 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937718 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8537760-49d8-4e35-9333-65c360424b0d-config podName:e8537760-49d8-4e35-9333-65c360424b0d nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.437711341 +0000 UTC m=+111.527165854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e8537760-49d8-4e35-9333-65c360424b0d-config") pod "kube-storage-version-migrator-operator-565b79b866-4fklm" (UID: "e8537760-49d8-4e35-9333-65c360424b0d") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937758 5122 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937783 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-default-certificate podName:fc07aacc-6c08-4ef3-a058-b6a823315eec nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.437777153 +0000 UTC m=+111.527231666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-default-certificate") pod "router-default-68cf44c8b8-xtm2m" (UID: "fc07aacc-6c08-4ef3-a058-b6a823315eec") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937798 5122 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937842 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist podName:b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.437835565 +0000 UTC m=+111.527290078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-46xbn" (UID: "b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937861 5122 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937881 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc07aacc-6c08-4ef3-a058-b6a823315eec-service-ca-bundle podName:fc07aacc-6c08-4ef3-a058-b6a823315eec nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.437875896 +0000 UTC m=+111.527330409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/fc07aacc-6c08-4ef3-a058-b6a823315eec-service-ca-bundle") pod "router-default-68cf44c8b8-xtm2m" (UID: "fc07aacc-6c08-4ef3-a058-b6a823315eec") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937928 5122 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937951 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-cabundle podName:6874d9ce-94e3-4cb2-9741-681f8ea50ec1 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.437944608 +0000 UTC m=+111.527399121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-cabundle") pod "service-ca-74545575db-dm88r" (UID: "6874d9ce-94e3-4cb2-9741-681f8ea50ec1") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.937993 5122 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938018 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-stats-auth podName:fc07aacc-6c08-4ef3-a058-b6a823315eec nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.43801242 +0000 UTC m=+111.527466933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-stats-auth") pod "router-default-68cf44c8b8-xtm2m" (UID: "fc07aacc-6c08-4ef3-a058-b6a823315eec") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938047 5122 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938101 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume podName:91922081-9786-47ef-ad37-7d1092f63918 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438095062 +0000 UTC m=+111.527549575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume") pod "collect-profiles-29531520-j8d8q" (UID: "91922081-9786-47ef-ad37-7d1092f63918") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938116 5122 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938156 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-profile-collector-cert podName:e5ff5c4f-19af-40c2-b4dc-140d9e75bf33 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438148614 +0000 UTC m=+111.527603127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-profile-collector-cert") pod "catalog-operator-75ff9f647d-2jgbb" (UID: "e5ff5c4f-19af-40c2-b4dc-140d9e75bf33") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938179 5122 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938205 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-certs podName:33257dc3-785e-4b1e-9087-01a0cb290b5c nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438198495 +0000 UTC m=+111.527653008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-certs") pod "machine-config-server-69cfh" (UID: "33257dc3-785e-4b1e-9087-01a0cb290b5c") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938225 5122 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938244 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f4ee5a2-9ca3-4990-896b-c81fe77da971-config-volume podName:2f4ee5a2-9ca3-4990-896b-c81fe77da971 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438239506 +0000 UTC m=+111.527694019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2f4ee5a2-9ca3-4990-896b-c81fe77da971-config-volume") pod "dns-default-267zx" (UID: "2f4ee5a2-9ca3-4990-896b-c81fe77da971") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938257 5122 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938274 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-srv-cert podName:e5ff5c4f-19af-40c2-b4dc-140d9e75bf33 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438269857 +0000 UTC m=+111.527724370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-srv-cert") pod "catalog-operator-75ff9f647d-2jgbb" (UID: "e5ff5c4f-19af-40c2-b4dc-140d9e75bf33") : failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938295 5122 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938321 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-metrics-certs podName:fc07aacc-6c08-4ef3-a058-b6a823315eec nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438314308 +0000 UTC m=+111.527768821 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-metrics-certs") pod "router-default-68cf44c8b8-xtm2m" (UID: "fc07aacc-6c08-4ef3-a058-b6a823315eec") : failed to sync secret cache: timed out waiting for the condition Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938882 5122 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938910 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-profile-collector-cert podName:506b0459-7f41-4507-8377-f1fc79c51113 nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438903875 +0000 UTC m=+111.528358388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-profile-collector-cert") pod "olm-operator-5cdf44d969-jq4c4" (UID: "506b0459-7f41-4507-8377-f1fc79c51113") : failed to sync secret cache: timed out waiting for the condition Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938928 5122 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938952 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-cert podName:9f8d90ce-c290-4192-b7e1-0ca7ce254dbf nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438946996 +0000 UTC m=+111.528401509 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-cert") pod "ingress-canary-tvgxr" (UID: "9f8d90ce-c290-4192-b7e1-0ca7ce254dbf") : failed to sync secret cache: timed out waiting for the condition Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938969 5122 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.938989 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8537760-49d8-4e35-9333-65c360424b0d-serving-cert podName:e8537760-49d8-4e35-9333-65c360424b0d nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.438984187 +0000 UTC m=+111.528438700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e8537760-49d8-4e35-9333-65c360424b0d-serving-cert") pod "kube-storage-version-migrator-operator-565b79b866-4fklm" (UID: "e8537760-49d8-4e35-9333-65c360424b0d") : failed to sync secret cache: timed out waiting for the condition Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.941707 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.944728 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.954624 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.966338 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.973248 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:43 crc kubenswrapper[5122]: E0224 00:10:43.973754 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.473742809 +0000 UTC m=+111.563197322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:43 crc kubenswrapper[5122]: I0224 00:10:43.980849 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.000687 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.021245 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.027554 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"] Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.041055 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.061385 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.075534 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.076419 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29531520-qpcf6"] Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.077066 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.577039749 +0000 UTC m=+111.666494262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.081309 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.085282 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.101208 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.101218 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.109518 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-t7d67"] Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.120817 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.140975 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.142082 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.153627 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"] Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.161282 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 24 00:10:44 crc kubenswrapper[5122]: W0224 00:10:44.177556 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8179910_a8d8_4190_89c7_fe04a9f19e86.slice/crio-c8cf085ac037c3d973c613fd826b92dc1aafcf166a1471346f1ade9750f234b3 WatchSource:0}: Error finding container c8cf085ac037c3d973c613fd826b92dc1aafcf166a1471346f1ade9750f234b3: Status 404 returned error can't find the container with id c8cf085ac037c3d973c613fd826b92dc1aafcf166a1471346f1ade9750f234b3 Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.180860 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.181299 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.681281136 +0000 UTC m=+111.770735649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.181516 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.191744 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-jnnfl"] Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.204815 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.221396 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.240884 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.261702 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.280802 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.281432 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.281817 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.781682764 +0000 UTC m=+111.871137277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.284465 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.285611 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.785574893 +0000 UTC m=+111.875029406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.287047 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-gcvhv"] Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.302550 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.302753 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" event={"ID":"36b3c56e-ec77-4507-a2c4-8556b0239225","Type":"ContainerStarted","Data":"a91e3a982914624b69218491025c2e3940e1f2815b54d5ad50918456bc68a7d2"} Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.308754 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" event={"ID":"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02","Type":"ContainerStarted","Data":"8763aae65767ef9422d4c35c024f5d1721128076098fac0fcaa14066860623ab"} Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.308826 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" event={"ID":"2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02","Type":"ContainerStarted","Data":"a2a7283fb6f9d492626d715805274cf42d2c0bccae13e70da4d416fa72e51a5f"} Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.313616 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" event={"ID":"58f519ba-9b81-416e-8f29-0c84e8607ab1","Type":"ContainerStarted","Data":"4b4995a3c2be72c70d56a395d2d255110ef153b8a9097833488090db647d603f"} Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.319174 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-t7d67" event={"ID":"0f4952ce-381c-46b9-b490-3403aa77106e","Type":"ContainerStarted","Data":"2ea4f503203d799b77696a1c2fd047b955d8692467e45ab83db060b33d13e542"} Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.320078 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97"] Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.320181 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" event={"ID":"e8179910-a8d8-4190-89c7-fe04a9f19e86","Type":"ContainerStarted","Data":"c8cf085ac037c3d973c613fd826b92dc1aafcf166a1471346f1ade9750f234b3"} Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.320980 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-qpcf6" event={"ID":"5247eba3-d3c0-4892-a371-f5d13f08c178","Type":"ContainerStarted","Data":"c841b6b4eb1b2a2c1d15f94d1e1abb27d2e98163c32330548f59d72b7c8b8dc8"} Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.321036 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.341774 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.360466 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.361726 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh"] Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.381446 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.386281 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.387191 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.887169826 +0000 UTC m=+111.976624339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.401030 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.421266 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.441098 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.461534 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.481613 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.488800 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-default-certificate\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.488840 5122 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-srv-cert\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.488871 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc07aacc-6c08-4ef3-a058-b6a823315eec-service-ca-bundle\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489005 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f4ee5a2-9ca3-4990-896b-c81fe77da971-config-volume\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489039 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-metrics-certs\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489096 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-cabundle\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489271 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489333 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-cert\") pod \"ingress-canary-tvgxr\" (UID: \"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf\") " pod="openshift-ingress-canary/ingress-canary-tvgxr" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489369 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8537760-49d8-4e35-9333-65c360424b0d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489392 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-profile-collector-cert\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489449 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 
24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.489575 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:44.98956314 +0000 UTC m=+112.079017653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489674 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-webhook-certs\") pod \"multus-admission-controller-69db94689b-4vqwn\" (UID: \"98c954e4-8a6f-4f90-a365-c781ba1eb8d9\") " pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489719 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f4ee5a2-9ca3-4990-896b-c81fe77da971-metrics-tls\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489791 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-apiservice-cert\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489846 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-stats-auth\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489871 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489918 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-certs\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489937 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489967 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-webhook-cert\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.489988 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8537760-49d8-4e35-9333-65c360424b0d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.490004 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-node-bootstrap-token\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.490021 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-srv-cert\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.490055 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.490664 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.491013 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-signing-cabundle\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.491319 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc07aacc-6c08-4ef3-a058-b6a823315eec-service-ca-bundle\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.492134 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.492702 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8537760-49d8-4e35-9333-65c360424b0d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.496018 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-srv-cert\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.496199 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-webhook-cert\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.496611 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-metrics-certs\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.496685 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.497542 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-webhook-certs\") pod \"multus-admission-controller-69db94689b-4vqwn\" (UID: \"98c954e4-8a6f-4f90-a365-c781ba1eb8d9\") " pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.497618 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-stats-auth\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.499663 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc07aacc-6c08-4ef3-a058-b6a823315eec-default-certificate\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.500070 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.500140 5122 request.go:752] "Waited before sending request" delay="1.89875852s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.500279 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/506b0459-7f41-4507-8377-f1fc79c51113-profile-collector-cert\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.500509 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-apiservice-cert\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.501709 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.502397 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-srv-cert\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.503955 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8537760-49d8-4e35-9333-65c360424b0d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.516986 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-cert\") pod \"ingress-canary-tvgxr\" (UID: \"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf\") " pod="openshift-ingress-canary/ingress-canary-tvgxr"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.521321 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.540548 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.561285 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.569728 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f4ee5a2-9ca3-4990-896b-c81fe77da971-config-volume\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.581986 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.590797 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.590989 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.090969317 +0000 UTC m=+112.180423830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.591520 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.591908 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.091888463 +0000 UTC m=+112.181342976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.594555 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f4ee5a2-9ca3-4990-896b-c81fe77da971-metrics-tls\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.601164 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.610789 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-certs\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.624019 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.641702 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.651168 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/33257dc3-785e-4b1e-9087-01a0cb290b5c-node-bootstrap-token\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.675990 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvc74\" (UniqueName: \"kubernetes.io/projected/37148282-9b0e-4952-8e4d-4da50bbc48f7-kube-api-access-rvc74\") pod \"apiserver-8596bd845d-zn588\" (UID: \"37148282-9b0e-4952-8e4d-4da50bbc48f7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.693279 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.693470 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.193432374 +0000 UTC m=+112.282886887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.693988 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.694351 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.194330929 +0000 UTC m=+112.283785432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.701220 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.722141 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.742145 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.761486 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.795016 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.795645 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.295598522 +0000 UTC m=+112.385053065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.795991 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.796520 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.296508267 +0000 UTC m=+112.385962790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.797091 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfhd7\" (UniqueName: \"kubernetes.io/projected/c4d739bc-bd88-426e-8683-d34b790d5d2f-kube-api-access-bfhd7\") pod \"machine-approver-54c688565-hcf48\" (UID: \"c4d739bc-bd88-426e-8683-d34b790d5d2f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.823252 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh4bp\" (UniqueName: \"kubernetes.io/projected/8e88e04e-2e6c-45d3-97fe-d49d5fd9f480-kube-api-access-kh4bp\") pod \"machine-api-operator-755bb95488-4frxv\" (UID: \"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.836300 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-bound-sa-token\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.857485 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx8z9\" (UniqueName: \"kubernetes.io/projected/8a476700-74f6-4579-b7f8-449e3c4ce746-kube-api-access-vx8z9\") pod \"openshift-apiserver-operator-846cbfc458-btsbr\" (UID: \"8a476700-74f6-4579-b7f8-449e3c4ce746\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.862175 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.868970 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.877746 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.884728 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk65r\" (UniqueName: \"kubernetes.io/projected/a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5-kube-api-access-rk65r\") pod \"downloads-747b44746d-m6v2b\" (UID: \"a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5\") " pod="openshift-console/downloads-747b44746d-m6v2b"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.885041 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.892562 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-m6v2b"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.896874 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.897324 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.397300307 +0000 UTC m=+112.486754820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.905667 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvbs6\" (UniqueName: \"kubernetes.io/projected/0f2d24c5-cbfa-410d-8105-d67830202ff1-kube-api-access-jvbs6\") pod \"console-64d44f6ddf-7fw77\" (UID: \"0f2d24c5-cbfa-410d-8105-d67830202ff1\") " pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.918743 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-njs9r\" (UniqueName: \"kubernetes.io/projected/6d407b1a-a260-41a6-a68d-b00b993fb77a-kube-api-access-njs9r\") pod \"openshift-controller-manager-operator-686468bdd5-79flb\" (UID: \"6d407b1a-a260-41a6-a68d-b00b993fb77a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.938282 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vb56\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-kube-api-access-4vb56\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.940446 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.995020 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv7qp\" (UniqueName: \"kubernetes.io/projected/6874d9ce-94e3-4cb2-9741-681f8ea50ec1-kube-api-access-zv7qp\") pod \"service-ca-74545575db-dm88r\" (UID: \"6874d9ce-94e3-4cb2-9741-681f8ea50ec1\") " pod="openshift-service-ca/service-ca-74545575db-dm88r"
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.998178 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:44 crc kubenswrapper[5122]: E0224 00:10:44.998529 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.498517259 +0000 UTC m=+112.587971772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:44 crc kubenswrapper[5122]: I0224 00:10:44.998743 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rrbn\" (UniqueName: \"kubernetes.io/projected/fc07aacc-6c08-4ef3-a058-b6a823315eec-kube-api-access-4rrbn\") pod \"router-default-68cf44c8b8-xtm2m\" (UID: \"fc07aacc-6c08-4ef3-a058-b6a823315eec\") " pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.002773 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cd2g\" (UniqueName: \"kubernetes.io/projected/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-kube-api-access-8cd2g\") pod \"cni-sysctl-allowlist-ds-46xbn\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.031884 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea6a33f4-db81-4724-8502-c62734961fc8-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-n87c7\" (UID: \"ea6a33f4-db81-4724-8502-c62734961fc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.046071 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8m6c\" (UniqueName: \"kubernetes.io/projected/c515c9f9-2b46-41e2-ae64-abfbafbac0fa-kube-api-access-f8m6c\") pod \"machine-config-operator-67c9d58cbb-dfp46\" (UID: \"c515c9f9-2b46-41e2-ae64-abfbafbac0fa\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.069040 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcmvq\" (UniqueName: \"kubernetes.io/projected/f0657a36-859b-4454-8940-c1b68b1161c6-kube-api-access-mcmvq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2pxbg\" (UID: \"f0657a36-859b-4454-8940-c1b68b1161c6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.087710 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27ed85a1-debc-420c-8603-b108f7957a7c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.091025 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-dm88r"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.097511 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvzwl\" (UniqueName: \"kubernetes.io/projected/33257dc3-785e-4b1e-9087-01a0cb290b5c-kube-api-access-wvzwl\") pod \"machine-config-server-69cfh\" (UID: \"33257dc3-785e-4b1e-9087-01a0cb290b5c\") " pod="openshift-machine-config-operator/machine-config-server-69cfh"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.099309 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.100064 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.600045699 +0000 UTC m=+112.689500212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.109620 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.116932 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38c87ba7-0787-425a-ab2c-5c5069cc14d3-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tl7gq\" (UID: \"38c87ba7-0787-425a-ab2c-5c5069cc14d3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.141150 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzc9v\" (UniqueName: \"kubernetes.io/projected/e8537760-49d8-4e35-9333-65c360424b0d-kube-api-access-bzc9v\") pod \"kube-storage-version-migrator-operator-565b79b866-4fklm\" (UID: \"e8537760-49d8-4e35-9333-65c360424b0d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.157358 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-4frxv"]
Feb 24 00:10:45 crc kubenswrapper[5122]: W0224 00:10:45.159431 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc07aacc_6c08_4ef3_a058_b6a823315eec.slice/crio-375e267668099914b782984fb7d3a81635ab9544c87c35385f6ddcf338732186 WatchSource:0}: Error finding container 375e267668099914b782984fb7d3a81635ab9544c87c35385f6ddcf338732186: Status 404 returned error can't find the container with id 375e267668099914b782984fb7d3a81635ab9544c87c35385f6ddcf338732186
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.161235 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.168130 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-597tv\" (UniqueName: \"kubernetes.io/projected/27ed85a1-debc-420c-8603-b108f7957a7c-kube-api-access-597tv\") pod \"cluster-image-registry-operator-86c45576b9-qfzzb\" (UID: \"27ed85a1-debc-420c-8603-b108f7957a7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.179105 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa7aa06-a13d-414d-8164-544e84019bab-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6z58r\" (UID: \"0aa7aa06-a13d-414d-8164-544e84019bab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.202271 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-7fw77"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.205052 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.205603 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.705588591 +0000 UTC m=+112.795043104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.208618 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-zn588"] Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.212795 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5flnm\" (UniqueName: \"kubernetes.io/projected/9f8d90ce-c290-4192-b7e1-0ca7ce254dbf-kube-api-access-5flnm\") pod \"ingress-canary-tvgxr\" (UID: \"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf\") " pod="openshift-ingress-canary/ingress-canary-tvgxr" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.215971 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-69cfh" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.216307 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj4cn\" (UniqueName: \"kubernetes.io/projected/02290ceb-1a56-4ebf-9786-e7ab09faf7b7-kube-api-access-cj4cn\") pod \"service-ca-operator-5b9c976747-hm9zj\" (UID: \"02290ceb-1a56-4ebf-9786-e7ab09faf7b7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.233895 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5h5x\" (UniqueName: \"kubernetes.io/projected/89a777c8-8c85-45e5-b60b-6abb996b25f8-kube-api-access-x5h5x\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:45 crc kubenswrapper[5122]: W0224 00:10:45.244728 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37148282_9b0e_4952_8e4d_4da50bbc48f7.slice/crio-197b9581deefe28decd7a74619b83d277fc834eb9f98f7114e4efa7b979d9aaf WatchSource:0}: Error finding container 197b9581deefe28decd7a74619b83d277fc834eb9f98f7114e4efa7b979d9aaf: Status 404 returned error can't find the container with id 197b9581deefe28decd7a74619b83d277fc834eb9f98f7114e4efa7b979d9aaf Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.251777 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.258197 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr"] Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.259745 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.260776 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-m6v2b"] Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.262860 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwfnj\" (UniqueName: \"kubernetes.io/projected/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-kube-api-access-kwfnj\") pod \"marketplace-operator-547dbd544d-5xl2l\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.270200 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.276285 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktzw5\" (UniqueName: \"kubernetes.io/projected/2f4ee5a2-9ca3-4990-896b-c81fe77da971-kube-api-access-ktzw5\") pod \"dns-default-267zx\" (UID: \"2f4ee5a2-9ca3-4990-896b-c81fe77da971\") " pod="openshift-dns/dns-default-267zx" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.283911 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb"] Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.288008 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.302062 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-72pcs\" (UniqueName: \"kubernetes.io/projected/865b2fc7-0d57-48d7-a665-fa9a93257469-kube-api-access-72pcs\") pod \"package-server-manager-77f986bd66-6ccnj\" (UID: \"865b2fc7-0d57-48d7-a665-fa9a93257469\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.304025 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.307824 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.308048 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.808020046 +0000 UTC m=+112.897474569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.308401 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.308981 5122 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:45.808967013 +0000 UTC m=+112.898421526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.324170 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zlr7\" (UniqueName: \"kubernetes.io/projected/91922081-9786-47ef-ad37-7d1092f63918-kube-api-access-5zlr7\") pod \"collect-profiles-29531520-j8d8q\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.324328 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.346912 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" event={"ID":"58f519ba-9b81-416e-8f29-0c84e8607ab1","Type":"ContainerStarted","Data":"072a6fdcc62f17f651130910a6d42386ee53cf9d58f3b81f875e515860b25532"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.348442 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89a777c8-8c85-45e5-b60b-6abb996b25f8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-lhtfv\" (UID: \"89a777c8-8c85-45e5-b60b-6abb996b25f8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.348475 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.352310 5122 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-jnnfl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.352384 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" podUID="58f519ba-9b81-416e-8f29-0c84e8607ab1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.353319 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.355153 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-dm88r"] Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.363499 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg" Feb 24 00:10:45 crc kubenswrapper[5122]: W0224 00:10:45.365520 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d407b1a_a260_41a6_a68d_b00b993fb77a.slice/crio-6e70ed983db4de8e658584a4bff4a4fe29e5dc05c5ed142fec065fc729061e57 WatchSource:0}: Error finding container 6e70ed983db4de8e658584a4bff4a4fe29e5dc05c5ed142fec065fc729061e57: Status 404 returned error can't find the container with id 6e70ed983db4de8e658584a4bff4a4fe29e5dc05c5ed142fec065fc729061e57 Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.367082 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-t7d67" event={"ID":"0f4952ce-381c-46b9-b490-3403aa77106e","Type":"ContainerStarted","Data":"73a6f378301c72b13971f100ccdc4057d0c274c77d5fb9cf52458d39a8b4291d"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.368272 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.368590 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7g2z\" (UniqueName: \"kubernetes.io/projected/506b0459-7f41-4507-8377-f1fc79c51113-kube-api-access-d7g2z\") pod \"olm-operator-5cdf44d969-jq4c4\" (UID: \"506b0459-7f41-4507-8377-f1fc79c51113\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.370537 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.376710 5122 patch_prober.go:28] interesting pod/console-operator-67c89758df-t7d67 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.376790 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-t7d67" podUID="0f4952ce-381c-46b9-b490-3403aa77106e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.381241 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" event={"ID":"e8179910-a8d8-4190-89c7-fe04a9f19e86","Type":"ContainerStarted","Data":"4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.382185 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.384025 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvn9t\" (UniqueName: \"kubernetes.io/projected/e5ff5c4f-19af-40c2-b4dc-140d9e75bf33-kube-api-access-rvn9t\") pod \"catalog-operator-75ff9f647d-2jgbb\" (UID: \"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.387278 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-qpcf6" event={"ID":"5247eba3-d3c0-4892-a371-f5d13f08c178","Type":"ContainerStarted","Data":"34f039fdca56d903945596a284aabc764a5d7329958f022f40df64db2f5aa266"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.387607 5122 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-b5hst container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.387649 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" podUID="e8179910-a8d8-4190-89c7-fe04a9f19e86" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.389404 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-m6v2b" event={"ID":"a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5","Type":"ContainerStarted","Data":"76f0af7c1852241667cfd5ab25559f7d12a666a0f33066eef1e59da86e11073c"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.396108 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" event={"ID":"d7cba214-7e4b-4e74-9422-9953c7d66961","Type":"ContainerStarted","Data":"00494b244fd3928c2512e0d9f2dc642cb3e6b6e22b6c3f46c4d5ba3c831e1dd6"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.396175 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" event={"ID":"d7cba214-7e4b-4e74-9422-9953c7d66961","Type":"ContainerStarted","Data":"392c6ce2140a250c3889846783e9f120be3d52dfd9cda09041ca3eefcfe46c05"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.399521 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" event={"ID":"36b3c56e-ec77-4507-a2c4-8556b0239225","Type":"ContainerStarted","Data":"71707b15587d62be35b26914938324551ba0b49f5fc6fa3d78a7b035ad1b168f"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.403385 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" event={"ID":"fc07aacc-6c08-4ef3-a058-b6a823315eec","Type":"ContainerStarted","Data":"375e267668099914b782984fb7d3a81635ab9544c87c35385f6ddcf338732186"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.403718 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rphkq\" (UniqueName: \"kubernetes.io/projected/51ccf528-5b90-43e8-9e17-d283a0b1723f-kube-api-access-rphkq\") pod \"etcd-operator-69b85846b6-g6n9r\" (UID: \"51ccf528-5b90-43e8-9e17-d283a0b1723f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.409455 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.409986 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:45.909966449 +0000 UTC m=+112.999420962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.413623 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" event={"ID":"4946f9dc-ac73-42d3-b0da-8509903497e0","Type":"ContainerStarted","Data":"df0a4bb1937be922ea28b381d42cc65b29fcc35b4e22e4ba61fb4dd569b01b71"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.413681 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" event={"ID":"4946f9dc-ac73-42d3-b0da-8509903497e0","Type":"ContainerStarted","Data":"38a0ce5380a1a17943765181af2fbe304152408ae9f4374cf1937944881fa11f"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.413693 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" event={"ID":"4946f9dc-ac73-42d3-b0da-8509903497e0","Type":"ContainerStarted","Data":"879feff3a5f3f8518b12519ac8abc37d23426302a4afd21176d7cc42b9e2dd57"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.416459 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.417518 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" event={"ID":"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63","Type":"ContainerStarted","Data":"d25400b7c36110aa3197a9b5d33a6fd737f173463274c994bcf8b382ae54706b"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.418280 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7fw77"] Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.418447 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" event={"ID":"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480","Type":"ContainerStarted","Data":"21b70ac60d054fd2e8a9a77e9a999922e17a4f842fc8eb8a6ad84672f0b1acd2"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.424975 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.437394 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.438515 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgv8r\" (UniqueName: \"kubernetes.io/projected/4e08c688-1af4-4f0a-9cca-26dbe17bb618-kube-api-access-sgv8r\") pod \"migrator-866fcbc849-spmnw\" (UID: \"4e08c688-1af4-4f0a-9cca-26dbe17bb618\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw" Feb 24 00:10:45 crc kubenswrapper[5122]: W0224 00:10:45.444781 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6874d9ce_94e3_4cb2_9741_681f8ea50ec1.slice/crio-bf3b4a6fdab5f3144817d5e5ed20559d2a7fdb06ad750beff0c2f03feb5f674f WatchSource:0}: Error finding container bf3b4a6fdab5f3144817d5e5ed20559d2a7fdb06ad750beff0c2f03feb5f674f: Status 404 returned error can't find the container with id bf3b4a6fdab5f3144817d5e5ed20559d2a7fdb06ad750beff0c2f03feb5f674f Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.449512 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h57f5\" (UniqueName: \"kubernetes.io/projected/10e3bdb7-6f23-4553-8536-bf73e0b2a45c-kube-api-access-h57f5\") pod \"packageserver-7d4fc7d867-q8fpc\" (UID: \"10e3bdb7-6f23-4553-8536-bf73e0b2a45c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.450973 5122 generic.go:358] "Generic (PLEG): container finished" podID="47d73a9e-a36f-42a0-a81b-f3e0c51259e8" containerID="7802a0912aa85bbe7d392f21fc662d3385b0b57f9e9e8bad8462332d90cd9aa4" exitCode=0 Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.451286 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 
00:10:45.451312 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" event={"ID":"47d73a9e-a36f-42a0-a81b-f3e0c51259e8","Type":"ContainerDied","Data":"7802a0912aa85bbe7d392f21fc662d3385b0b57f9e9e8bad8462332d90cd9aa4"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.451331 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" event={"ID":"47d73a9e-a36f-42a0-a81b-f3e0c51259e8","Type":"ContainerStarted","Data":"1e108027d53502c0cadfe1b3316fe1f1ec96ab883cf7c28d66168688b3ecbbc4"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.458520 5122 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-lxjqf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.458573 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" podUID="36b3c56e-ec77-4507-a2c4-8556b0239225" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.459504 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.465036 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" event={"ID":"c4d739bc-bd88-426e-8683-d34b790d5d2f","Type":"ContainerStarted","Data":"911fd78a6e325864671d33cf5418c27d575c79680c3af990c992fcca4996d110"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.469464 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" event={"ID":"37148282-9b0e-4952-8e4d-4da50bbc48f7","Type":"ContainerStarted","Data":"197b9581deefe28decd7a74619b83d277fc834eb9f98f7114e4efa7b979d9aaf"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.470641 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" event={"ID":"8a476700-74f6-4579-b7f8-449e3c4ce746","Type":"ContainerStarted","Data":"297a67a31c89cad2755b57d47bc7ddabd3cecb8a1bbe8d0c63d9f5c594b02493"} Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.471817 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkwq6\" (UniqueName: \"kubernetes.io/projected/9cc08205-f0b1-47dc-a44c-da4611ff6b88-kube-api-access-gkwq6\") pod \"dns-operator-799b87ffcd-2k6m5\" (UID: \"9cc08205-f0b1-47dc-a44c-da4611ff6b88\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.480551 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggjb\" (UniqueName: \"kubernetes.io/projected/fca96a93-d382-46a6-81cf-59840b39671e-kube-api-access-pggjb\") pod \"machine-config-controller-f9cdd68f7-qr5vw\" (UID: \"fca96a93-d382-46a6-81cf-59840b39671e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" Feb 24 00:10:45 
crc kubenswrapper[5122]: I0224 00:10:45.489811 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb"] Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.501190 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tvgxr" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.501527 5122 request.go:752] "Waited before sending request" delay="2.563378513s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.508617 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-267zx" Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.513266 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.515282 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.015264684 +0000 UTC m=+113.104719207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:45 crc kubenswrapper[5122]: W0224 00:10:45.519797 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f2d24c5_cbfa_410d_8105_d67830202ff1.slice/crio-4c037fab3653cae2e6c3a4b1b19bccc7caab66c674ce028cd4fc11be0f937f77 WatchSource:0}: Error finding container 4c037fab3653cae2e6c3a4b1b19bccc7caab66c674ce028cd4fc11be0f937f77: Status 404 returned error can't find the container with id 4c037fab3653cae2e6c3a4b1b19bccc7caab66c674ce028cd4fc11be0f937f77
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.520585 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fntv\" (UniqueName: \"kubernetes.io/projected/1d7b77dd-f3cb-474b-8db4-4a6f9af07a04-kube-api-access-6fntv\") pod \"csi-hostpathplugin-5jfb2\" (UID: \"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04\") " pod="hostpath-provisioner/csi-hostpathplugin-5jfb2"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.524516 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.530606 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwlj\" (UniqueName: \"kubernetes.io/projected/98c954e4-8a6f-4f90-a365-c781ba1eb8d9-kube-api-access-knwlj\") pod \"multus-admission-controller-69db94689b-4vqwn\" (UID: \"98c954e4-8a6f-4f90-a365-c781ba1eb8d9\") "
pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.543295 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.583622 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.595421 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.654737 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.655386 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.655481 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.155456516 +0000 UTC m=+113.244911019 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.656167 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.660964 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"
Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.661376 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.161351461 +0000 UTC m=+113.250805974 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.682490 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.699333 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.744396 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.759004 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.759420 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.259390304 +0000 UTC m=+113.348844817 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.759490 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.759789 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.259783325 +0000 UTC m=+113.349237838 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.764707 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7"]
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.786636 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5jfb2"
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.850221 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"]
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.860841 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.861167 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.361141651 +0000 UTC m=+113.450596184 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.874292 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm"]
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.915682 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj"]
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.962760 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:45 crc kubenswrapper[5122]: E0224 00:10:45.963188 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.463171815 +0000 UTC m=+113.552626328 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:45 crc kubenswrapper[5122]: I0224 00:10:45.965917 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-267zx"]
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.064187 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.064438 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.564420618 +0000 UTC m=+113.653875131 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.120594 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" podStartSLOduration=90.120569649 podStartE2EDuration="1m30.120569649s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:46.114049196 +0000 UTC m=+113.203503739" watchObservedRunningTime="2026-02-24 00:10:46.120569649 +0000 UTC m=+113.210024162"
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.164991 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.165416 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.665398383 +0000 UTC m=+113.754852896 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.235630 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4"]
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.252418 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb"]
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.257296 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" podStartSLOduration=90.257271563 podStartE2EDuration="1m30.257271563s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:46.242965233 +0000 UTC m=+113.332419746" watchObservedRunningTime="2026-02-24 00:10:46.257271563 +0000 UTC m=+113.346726066"
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.266657 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.267000 5122 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.766972684 +0000 UTC m=+113.856427197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.267889 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.268213 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.768204639 +0000 UTC m=+113.857659152 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.291897 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tvgxr"]
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.357835 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"]
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.372944 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.373104 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.873054962 +0000 UTC m=+113.962509475 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.373281 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.373584 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.873570697 +0000 UTC m=+113.963025200 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.380027 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq"]
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.475011 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.475795 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:46.975774296 +0000 UTC m=+114.065228809 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.534370 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg"]
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.534474 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" event={"ID":"fc07aacc-6c08-4ef3-a058-b6a823315eec","Type":"ContainerStarted","Data":"3c05953d9856bc2eef55edffae997b820b3dfabe427cb1e0e580cb9eaac1a4f0"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.556254 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" event={"ID":"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33","Type":"ContainerStarted","Data":"1605199ecc24e0654dfad459d400b3baf0d49a4eb7d114ec451b1a250ff05708"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.556473 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-t7d67" podStartSLOduration=90.556459823 podStartE2EDuration="1m30.556459823s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:46.52846937 +0000 UTC m=+113.617923893" watchObservedRunningTime="2026-02-24 00:10:46.556459823 +0000 UTC m=+113.645914336"
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.557343 5122 kubelet.go:2569]
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" event={"ID":"865b2fc7-0d57-48d7-a665-fa9a93257469","Type":"ContainerStarted","Data":"bbf514306f5ce2676c6b46a929836df6e58bafb1cf5a3d3873746691cad83c19"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.564643 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7fw77" event={"ID":"0f2d24c5-cbfa-410d-8105-d67830202ff1","Type":"ContainerStarted","Data":"4c037fab3653cae2e6c3a4b1b19bccc7caab66c674ce028cd4fc11be0f937f77"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.571216 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r"]
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.571454 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" podStartSLOduration=90.571441342 podStartE2EDuration="1m30.571441342s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:46.567599945 +0000 UTC m=+113.657054478" watchObservedRunningTime="2026-02-24 00:10:46.571441342 +0000 UTC m=+113.660895855"
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.571737 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" event={"ID":"6d407b1a-a260-41a6-a68d-b00b993fb77a","Type":"ContainerStarted","Data":"6e70ed983db4de8e658584a4bff4a4fe29e5dc05c5ed142fec065fc729061e57"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.576538 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.576831 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.076817573 +0000 UTC m=+114.166272086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.577614 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" event={"ID":"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63","Type":"ContainerStarted","Data":"0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.579872 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" event={"ID":"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480","Type":"ContainerStarted","Data":"df8ef7cedba3c1a47e5b317c77de3505a49478bd65149729402e1f4bfe2559a4"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.580730 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-267zx"
event={"ID":"2f4ee5a2-9ca3-4990-896b-c81fe77da971","Type":"ContainerStarted","Data":"3341ad706c9c55b4ab21377ef1402f2012805a9562a8c930ce07d533e941e270"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.582900 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" event={"ID":"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1","Type":"ContainerStarted","Data":"88566cd16bbc2bb5c4ea700adc8d078ccbc82f5da0bde2d8891392ef3aaa0324"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.584950 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" event={"ID":"ea6a33f4-db81-4724-8502-c62734961fc8","Type":"ContainerStarted","Data":"271c9d70664becf26b114e2dbb331e21ec69dc955d1ea5cd0462d32db61c380e"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.585906 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" event={"ID":"c4d739bc-bd88-426e-8683-d34b790d5d2f","Type":"ContainerStarted","Data":"f5d2c913c7336eb0378c763aeea4f4d5b3b3d6f4877dabb27c317d2fef31d790"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.586958 5122 generic.go:358] "Generic (PLEG): container finished" podID="37148282-9b0e-4952-8e4d-4da50bbc48f7" containerID="3f1ce7294416bf2c34aad48c1a692f729df863fc3f8bcdbb1b1e0d55f6056948" exitCode=0
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.587005 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" event={"ID":"37148282-9b0e-4952-8e4d-4da50bbc48f7","Type":"ContainerDied","Data":"3f1ce7294416bf2c34aad48c1a692f729df863fc3f8bcdbb1b1e0d55f6056948"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.606793 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj"]
Feb 24 00:10:46 crc
kubenswrapper[5122]: I0224 00:10:46.611740 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" event={"ID":"8a476700-74f6-4579-b7f8-449e3c4ce746","Type":"ContainerStarted","Data":"9231c768e4e50b6df7bcf5ae265a9c7ec054ecc91418ae58779b90772abf0b66"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.614698 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tvgxr" event={"ID":"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf","Type":"ContainerStarted","Data":"d4d607d7763fa2347bab586caa4a33ea1c98c688f22cb44a2eb5d01951735f1d"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.620518 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" event={"ID":"506b0459-7f41-4507-8377-f1fc79c51113","Type":"ContainerStarted","Data":"b7e662b11d6c364c0ddf1df7b1f60440ae77ebab5f75419cff62b96c3973e7c3"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.626413 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-69cfh" event={"ID":"33257dc3-785e-4b1e-9087-01a0cb290b5c","Type":"ContainerStarted","Data":"4ac053c25574f380ee08990dbb4d0ac879463bfbb7148abf522e9e6d9f1d2a2d"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.637152 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" event={"ID":"e8537760-49d8-4e35-9333-65c360424b0d","Type":"ContainerStarted","Data":"3836527b66974c2ae9ec69849edd51d073ac747b18387a136607fa3128d07377"}
Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.641555 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46"
event={"ID":"c515c9f9-2b46-41e2-ae64-abfbafbac0fa","Type":"ContainerStarted","Data":"b49ca79e658c1ee3975190590215d199a33005d7b0463dc3a44811d502c70b90"} Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.644674 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-dm88r" event={"ID":"6874d9ce-94e3-4cb2-9741-681f8ea50ec1","Type":"ContainerStarted","Data":"bf3b4a6fdab5f3144817d5e5ed20559d2a7fdb06ad750beff0c2f03feb5f674f"} Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.657837 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" event={"ID":"27ed85a1-debc-420c-8603-b108f7957a7c","Type":"ContainerStarted","Data":"650fd2dbc550387a997c20e0900e3a1e398ba21c2ab389fa4c30a4253c25978f"} Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.683202 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.683293 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.183273431 +0000 UTC m=+114.272727944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.683991 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.686631 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.686953 5122 patch_prober.go:28] interesting pod/console-operator-67c89758df-t7d67 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.687012 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-t7d67" podUID="0f4952ce-381c-46b9-b490-3403aa77106e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.686953 5122 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-lxjqf 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.687093 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" podUID="36b3c56e-ec77-4507-a2c4-8556b0239225" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.689666 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.189648359 +0000 UTC m=+114.279102872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.714713 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" podStartSLOduration=90.71469823 podStartE2EDuration="1m30.71469823s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:46.676315306 +0000 UTC m=+113.765769829" watchObservedRunningTime="2026-02-24 
00:10:46.71469823 +0000 UTC m=+113.804152743" Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.732058 5122 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-jnnfl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.732132 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" podUID="58f519ba-9b81-416e-8f29-0c84e8607ab1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.732756 5122 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-b5hst container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.732806 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" podUID="e8179910-a8d8-4190-89c7-fe04a9f19e86" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.785572 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:46 crc 
kubenswrapper[5122]: E0224 00:10:46.787369 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.287348363 +0000 UTC m=+114.376802876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.802800 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29531520-qpcf6" podStartSLOduration=91.802780274 podStartE2EDuration="1m31.802780274s" podCreationTimestamp="2026-02-24 00:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:46.798641689 +0000 UTC m=+113.888096222" watchObservedRunningTime="2026-02-24 00:10:46.802780274 +0000 UTC m=+113.892234787" Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.811164 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2k6m5"] Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.819319 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc"] Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.819943 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw"] Feb 24 00:10:46 crc 
kubenswrapper[5122]: I0224 00:10:46.839806 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv"] Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.840592 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q"] Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.855783 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r"] Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.856762 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4vqwn"] Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.887496 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.887856 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.387839244 +0000 UTC m=+114.477293757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:46 crc kubenswrapper[5122]: W0224 00:10:46.889831 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e3bdb7_6f23_4553_8536_bf73e0b2a45c.slice/crio-e10ce8320499c114515e730e796420c4aeddf8d0beab4e8c89fe889a41b431ef WatchSource:0}: Error finding container e10ce8320499c114515e730e796420c4aeddf8d0beab4e8c89fe889a41b431ef: Status 404 returned error can't find the container with id e10ce8320499c114515e730e796420c4aeddf8d0beab4e8c89fe889a41b431ef Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.930865 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5jfb2"] Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.937760 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw"] Feb 24 00:10:46 crc kubenswrapper[5122]: W0224 00:10:46.948732 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfca96a93_d382_46a6_81cf_59840b39671e.slice/crio-40e6f09c6b26e406755c390401dcc03cf1dc0165aec4d3afd5bd0e8374ecb409 WatchSource:0}: Error finding container 40e6f09c6b26e406755c390401dcc03cf1dc0165aec4d3afd5bd0e8374ecb409: Status 404 returned error can't find the container with id 40e6f09c6b26e406755c390401dcc03cf1dc0165aec4d3afd5bd0e8374ecb409 Feb 24 00:10:46 crc kubenswrapper[5122]: W0224 00:10:46.949283 5122 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d7b77dd_f3cb_474b_8db4_4a6f9af07a04.slice/crio-5fd27c2e62ad6ecacc995395f813e26bc51cbc3d27c7da9aba4407749de973cf WatchSource:0}: Error finding container 5fd27c2e62ad6ecacc995395f813e26bc51cbc3d27c7da9aba4407749de973cf: Status 404 returned error can't find the container with id 5fd27c2e62ad6ecacc995395f813e26bc51cbc3d27c7da9aba4407749de973cf Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.988558 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.988928 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.488907531 +0000 UTC m=+114.578362044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:46 crc kubenswrapper[5122]: I0224 00:10:46.989069 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:46 crc kubenswrapper[5122]: E0224 00:10:46.989350 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.489343544 +0000 UTC m=+114.578798057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.072975 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-vbjdh" podStartSLOduration=91.072955863 podStartE2EDuration="1m31.072955863s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:47.07143871 +0000 UTC m=+114.160893243" watchObservedRunningTime="2026-02-24 00:10:47.072955863 +0000 UTC m=+114.162410386" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.094395 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.094528 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.594498146 +0000 UTC m=+114.683952659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.094904 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.095269 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.595255697 +0000 UTC m=+114.684710210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.110661 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.117474 5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.117543 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.195862 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.196245 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.696228912 +0000 UTC m=+114.785683415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.297249 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.297718 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.7977022 +0000 UTC m=+114.887156713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.398793 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.399192 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:47.899157399 +0000 UTC m=+114.988611922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.500758 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.501313 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.001292756 +0000 UTC m=+115.090747259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.512891 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-btsbr" podStartSLOduration=91.51286715 podStartE2EDuration="1m31.51286715s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:47.512284444 +0000 UTC m=+114.601738977" watchObservedRunningTime="2026-02-24 00:10:47.51286715 +0000 UTC m=+114.602321653" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.513477 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vtw97" podStartSLOduration=91.513472167 podStartE2EDuration="1m31.513472167s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:47.478786787 +0000 UTC m=+114.568241300" watchObservedRunningTime="2026-02-24 00:10:47.513472167 +0000 UTC m=+114.602926680" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.552708 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.553631 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.559227 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podStartSLOduration=91.559199136 podStartE2EDuration="1m31.559199136s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:47.55720657 +0000 UTC m=+114.646661103" watchObservedRunningTime="2026-02-24 00:10:47.559199136 +0000 UTC m=+114.648653649" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.618964 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.619218 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.119178784 +0000 UTC m=+115.208633307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.619660 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.620080 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.120067499 +0000 UTC m=+115.209522012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.658731 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" podStartSLOduration=5.65870963 podStartE2EDuration="5.65870963s" podCreationTimestamp="2026-02-24 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:47.657187757 +0000 UTC m=+114.746642290" watchObservedRunningTime="2026-02-24 00:10:47.65870963 +0000 UTC m=+114.748164143" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.720749 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.720915 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.22088702 +0000 UTC m=+115.310341543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.721309 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.721668 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.221653061 +0000 UTC m=+115.311107574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.751732 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" event={"ID":"47d73a9e-a36f-42a0-a81b-f3e0c51259e8","Type":"ContainerStarted","Data":"0337827b614ca76daef7f06e78e185f4a75b60efc1e8e421635df03e6e29fb37"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.757005 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" event={"ID":"10e3bdb7-6f23-4553-8536-bf73e0b2a45c","Type":"ContainerStarted","Data":"e10ce8320499c114515e730e796420c4aeddf8d0beab4e8c89fe889a41b431ef"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.761903 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.765289 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" event={"ID":"51ccf528-5b90-43e8-9e17-d283a0b1723f","Type":"ContainerStarted","Data":"d0eadbbb6fb3be4b41068649eda3957e84abf1e2fbeef019c10e8ea76e71c0f5"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.771606 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" 
event={"ID":"38c87ba7-0787-425a-ab2c-5c5069cc14d3","Type":"ContainerStarted","Data":"4ed8c8123175766d0e3dd281942111717416f2b92cd79bf4e875f1cd6425cd98"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.773427 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" event={"ID":"91922081-9786-47ef-ad37-7d1092f63918","Type":"ContainerStarted","Data":"c22576154f6f56e5ba0f613296275eb21d341bdf5271f9ad35ef4c2bb1456d2c"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.793571 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" event={"ID":"c4d739bc-bd88-426e-8683-d34b790d5d2f","Type":"ContainerStarted","Data":"fdbde560e5020ee5d6363a7493d99f16aa83808cf579d0d22567cda53d018d44"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.793632 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-69cfh" event={"ID":"33257dc3-785e-4b1e-9087-01a0cb290b5c","Type":"ContainerStarted","Data":"d380b7e795ab9d90570e62d5242668bbdb10216746d4ca66309db99f7e7b951c"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.793653 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" event={"ID":"e8537760-49d8-4e35-9333-65c360424b0d","Type":"ContainerStarted","Data":"aefd80b7afa459cfcd767271e4efcc5f8ed189e9edfdb97a130d2632f80e2c03"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.793676 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" event={"ID":"27ed85a1-debc-420c-8603-b108f7957a7c","Type":"ContainerStarted","Data":"893b789fa89e2747971c1e4cf9e3fe46c6aa33ef7dc6624a79a9f7026e990bdc"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.793697 5122 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn" event={"ID":"98c954e4-8a6f-4f90-a365-c781ba1eb8d9","Type":"ContainerStarted","Data":"be235b1a84cd69b2a9d93e136830b548d351a2409fc87082d317784a84582101"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.794590 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" event={"ID":"9cc08205-f0b1-47dc-a44c-da4611ff6b88","Type":"ContainerStarted","Data":"93f93221e3475827faa739a8792971f28d1dca5e7c5c6691dd8d61003398f61d"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.805284 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" event={"ID":"02290ceb-1a56-4ebf-9786-e7ab09faf7b7","Type":"ContainerStarted","Data":"997f061d92ce0314ecbc5ed6f29f95f37d66c11a1bc25b59771297ab7ca6baa5"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.828730 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.828933 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.328907502 +0000 UTC m=+115.418362025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.829174 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.829537 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.329524029 +0000 UTC m=+115.418978552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.836623 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" event={"ID":"865b2fc7-0d57-48d7-a665-fa9a93257469","Type":"ContainerStarted","Data":"edd7a3f74dbecaabefc3359bd49ed11ae63952d063f9da374779d50844a8416b"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.853566 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" event={"ID":"8e88e04e-2e6c-45d3-97fe-d49d5fd9f480","Type":"ContainerStarted","Data":"1e815bdc5e251cdec0016f0dc3ecf4e76ae80efd7abe6964606b9efe06dd728e"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.871524 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-267zx" event={"ID":"2f4ee5a2-9ca3-4990-896b-c81fe77da971","Type":"ContainerStarted","Data":"7dad54322f25cd8281bba8d7630095f214478bb5cd9a3f4c292a99a0a7f0dd8f"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.880187 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" event={"ID":"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04","Type":"ContainerStarted","Data":"5fd27c2e62ad6ecacc995395f813e26bc51cbc3d27c7da9aba4407749de973cf"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.884673 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" 
event={"ID":"ea6a33f4-db81-4724-8502-c62734961fc8","Type":"ContainerStarted","Data":"f0955a36377fb65488f6b66661611abe75123022b8c906256d77cb581fa2cbe3"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.890109 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" event={"ID":"fca96a93-d382-46a6-81cf-59840b39671e","Type":"ContainerStarted","Data":"40e6f09c6b26e406755c390401dcc03cf1dc0165aec4d3afd5bd0e8374ecb409"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.897346 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" event={"ID":"89a777c8-8c85-45e5-b60b-6abb996b25f8","Type":"ContainerStarted","Data":"a3bae19d82bfa89a681da7bd7eb0b50b3c10c83a8c58765ffdaab8061dcf89ab"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.899348 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg" event={"ID":"f0657a36-859b-4454-8940-c1b68b1161c6","Type":"ContainerStarted","Data":"ebea5b51bc9388e1110506d21a0339ed9d61443a353f167a59d6c0f7d121e85f"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.903496 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" event={"ID":"c515c9f9-2b46-41e2-ae64-abfbafbac0fa","Type":"ContainerStarted","Data":"9f574c9a411b1ba6e568256c23e0a4870b63d6da54d2a09243956f8536258d02"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.905405 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-dm88r" event={"ID":"6874d9ce-94e3-4cb2-9741-681f8ea50ec1","Type":"ContainerStarted","Data":"aab308918c95d329a8944e6081cf71b8df8277b233e52df5ef90ca8f18399314"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.906773 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-747b44746d-m6v2b" event={"ID":"a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5","Type":"ContainerStarted","Data":"93f158ca3e6828eb9115338a6c515ad018959cb1ee98fca4b97639dc6f49fb13"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.908430 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" event={"ID":"0aa7aa06-a13d-414d-8164-544e84019bab","Type":"ContainerStarted","Data":"6904b51b86b6a95eafefae33620ec7165c4b1f55b121093a79ecd9799c02fdad"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.913563 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7fw77" event={"ID":"0f2d24c5-cbfa-410d-8105-d67830202ff1","Type":"ContainerStarted","Data":"514bd83c9e4ee5f7bca19f9e3b7cf42fbc6bc562e08c8abd4b18df92f2f1ef28"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.916147 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" event={"ID":"6d407b1a-a260-41a6-a68d-b00b993fb77a","Type":"ContainerStarted","Data":"1c370b430917e0124bad50348c2cb41d17adebaef9f23f86a9f9826a262574a9"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.922004 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-m6v2b" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.924477 5122 patch_prober.go:28] interesting pod/downloads-747b44746d-m6v2b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.924534 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-m6v2b" podUID="a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.939345 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:47 crc kubenswrapper[5122]: E0224 00:10:47.940622 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.440600676 +0000 UTC m=+115.530055189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.943064 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw" event={"ID":"4e08c688-1af4-4f0a-9cca-26dbe17bb618","Type":"ContainerStarted","Data":"12d655846cfba4906e015e733fed5ed768e59525ec74fadaccb17f1b33597813"} Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.959858 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qfzzb" podStartSLOduration=91.959831164 
podStartE2EDuration="1m31.959831164s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:47.948455556 +0000 UTC m=+115.037910099" watchObservedRunningTime="2026-02-24 00:10:47.959831164 +0000 UTC m=+115.049285677" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.961786 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:10:47 crc kubenswrapper[5122]: I0224 00:10:47.980006 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-69cfh" podStartSLOduration=5.979988978 podStartE2EDuration="5.979988978s" podCreationTimestamp="2026-02-24 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:47.979762192 +0000 UTC m=+115.069216735" watchObservedRunningTime="2026-02-24 00:10:47.979988978 +0000 UTC m=+115.069443501" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.014812 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" podStartSLOduration=92.014795932 podStartE2EDuration="1m32.014795932s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.012440316 +0000 UTC m=+115.101894839" watchObservedRunningTime="2026-02-24 00:10:48.014795932 +0000 UTC m=+115.104250445" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.022913 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:10:48 crc 
kubenswrapper[5122]: I0224 00:10:48.040630 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.041403 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.541386396 +0000 UTC m=+115.630840909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.054105 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4fklm" podStartSLOduration=92.05406067 podStartE2EDuration="1m32.05406067s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.052720383 +0000 UTC m=+115.142174916" watchObservedRunningTime="2026-02-24 00:10:48.05406067 +0000 UTC m=+115.143515193" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.247727 5122 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.249305 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.749276435 +0000 UTC m=+115.838730948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.255108 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.255158 5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.255221 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": 
dial tcp [::1]:1936: connect: connection refused" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.255783 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.258067 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.75805269 +0000 UTC m=+115.847507203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.292007 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.309247 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcf48" podStartSLOduration=94.309228519 podStartE2EDuration="1m34.309228519s" podCreationTimestamp="2026-02-24 00:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
00:10:48.286942756 +0000 UTC m=+115.376397279" watchObservedRunningTime="2026-02-24 00:10:48.309228519 +0000 UTC m=+115.398683032" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.371681 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.372543 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:48.8725232 +0000 UTC m=+115.961977713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.451124 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-4frxv" podStartSLOduration=92.451104288 podStartE2EDuration="1m32.451104288s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.443704561 +0000 UTC m=+115.533159074" watchObservedRunningTime="2026-02-24 00:10:48.451104288 +0000 UTC m=+115.540558791" Feb 24 00:10:48 crc 
kubenswrapper[5122]: I0224 00:10:48.451483 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-79flb" podStartSLOduration=92.451474859 podStartE2EDuration="1m32.451474859s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.391043728 +0000 UTC m=+115.480498271" watchObservedRunningTime="2026-02-24 00:10:48.451474859 +0000 UTC m=+115.540929402" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.471318 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-dm88r" podStartSLOduration=92.471303083 podStartE2EDuration="1m32.471303083s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.470580263 +0000 UTC m=+115.560034776" watchObservedRunningTime="2026-02-24 00:10:48.471303083 +0000 UTC m=+115.560757596" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.481389 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.481735 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:48.981721205 +0000 UTC m=+116.071175718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.497755 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-n87c7" podStartSLOduration=92.497734443 podStartE2EDuration="1m32.497734443s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.496348104 +0000 UTC m=+115.585802617" watchObservedRunningTime="2026-02-24 00:10:48.497734443 +0000 UTC m=+115.587188956" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.522771 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-m6v2b" podStartSLOduration=92.522749963 podStartE2EDuration="1m32.522749963s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.517683001 +0000 UTC m=+115.607137534" watchObservedRunningTime="2026-02-24 00:10:48.522749963 +0000 UTC m=+115.612204466" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.586690 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.587161 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.087145034 +0000 UTC m=+116.176599547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.604373 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-7fw77" podStartSLOduration=92.604356936 podStartE2EDuration="1m32.604356936s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.560161979 +0000 UTC m=+115.649616492" watchObservedRunningTime="2026-02-24 00:10:48.604356936 +0000 UTC m=+115.693811449" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.688909 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: 
\"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.689340 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.189323023 +0000 UTC m=+116.278777536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.790172 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.790454 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.290436551 +0000 UTC m=+116.379891064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.892957 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.893428 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.393408181 +0000 UTC m=+116.482862704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.906584 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-t7d67" Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.964505 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-46xbn"] Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.991712 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" event={"ID":"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1","Type":"ContainerStarted","Data":"a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e"} Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.994276 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:48 crc kubenswrapper[5122]: E0224 00:10:48.994634 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.494618423 +0000 UTC m=+116.584072936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.996383 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg" event={"ID":"f0657a36-859b-4454-8940-c1b68b1161c6","Type":"ContainerStarted","Data":"26cfc58a05617d9dfe9b3db922b6e389bea7ecec260390ebdb4045d091c3aec7"} Feb 24 00:10:48 crc kubenswrapper[5122]: I0224 00:10:48.999864 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" event={"ID":"506b0459-7f41-4507-8377-f1fc79c51113","Type":"ContainerStarted","Data":"48ec034e5bf3c538b0b68811e1050db51776713869de87c806dd84972b24de75"} Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.002427 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" event={"ID":"e5ff5c4f-19af-40c2-b4dc-140d9e75bf33","Type":"ContainerStarted","Data":"18d2a65ca7201b0c2f6e03482f4bd1d2300f97cc38d04916fa87ae442fd53598"} Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.009895 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" event={"ID":"38c87ba7-0787-425a-ab2c-5c5069cc14d3","Type":"ContainerStarted","Data":"d80e194757fe8412e6681706a46602ca0df581917fb8e8f7d56cacd6b1f78c0d"} Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.014107 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-tvgxr" event={"ID":"9f8d90ce-c290-4192-b7e1-0ca7ce254dbf","Type":"ContainerStarted","Data":"3189f21f85aa5f0f6729e5cb8166d136b65715f7959a0df211f35933d623cdc7"} Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.096086 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.096450 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.596437831 +0000 UTC m=+116.685892344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.161664 5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:10:49 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld Feb 24 00:10:49 crc kubenswrapper[5122]: [+]process-running ok Feb 24 00:10:49 crc kubenswrapper[5122]: healthz check failed Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.161929 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.175853 5122 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-rdpqq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]log ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]etcd ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/generic-apiserver-start-informers ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/max-in-flight-filter ok Feb 24 00:10:49 crc kubenswrapper[5122]: 
[+]poststarthook/storage-object-count-tracker-hook ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 24 00:10:49 crc kubenswrapper[5122]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 24 00:10:49 crc kubenswrapper[5122]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/project.openshift.io-projectcache ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/openshift.io-startinformers ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 24 00:10:49 crc kubenswrapper[5122]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 24 00:10:49 crc kubenswrapper[5122]: livez check failed Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.175929 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" podUID="2cbfc0dc-07e4-45f0-b8f6-36bc89e8da02" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.197271 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.197638 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:49.697617822 +0000 UTC m=+116.787072335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.299502 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.299815 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.799801911 +0000 UTC m=+116.889256424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.400745 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.401141 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.901099615 +0000 UTC m=+116.990554158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.401672 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.402213 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:49.902197835 +0000 UTC m=+116.991652358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.503246 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.503364 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.003345895 +0000 UTC m=+117.092800398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.503585 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.503831 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.003822668 +0000 UTC m=+117.093277191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.604649 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.604770 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.104750982 +0000 UTC m=+117.194205495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.604992 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.605339 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.105329268 +0000 UTC m=+117.194783781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.706909 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.707165 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.207126816 +0000 UTC m=+117.296581329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.707639 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.708057 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.208040532 +0000 UTC m=+117.297495045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.808730 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.808918 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.308883563 +0000 UTC m=+117.398338076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.809113 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.809427 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.309412488 +0000 UTC m=+117.398867001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.910810 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.911154 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.411099073 +0000 UTC m=+117.500553646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:49 crc kubenswrapper[5122]: I0224 00:10:49.911571 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:49 crc kubenswrapper[5122]: E0224 00:10:49.911952 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.411940726 +0000 UTC m=+117.501395239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.013249 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.013398 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.513370784 +0000 UTC m=+117.602825297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.013715 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.014071 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.514051943 +0000 UTC m=+117.603506466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.020344 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" event={"ID":"37148282-9b0e-4952-8e4d-4da50bbc48f7","Type":"ContainerStarted","Data":"ed3ad771ca44bba8a6ba7e147923ffe9bd61d3823c2cd80ee0ef84d02bad1bcd"} Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.021822 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" event={"ID":"91922081-9786-47ef-ad37-7d1092f63918","Type":"ContainerStarted","Data":"2c6af9c07eef5e92014d019f0e749901d4ad7e3400c2ddc9334346e20b28d3cc"} Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.087453 5122 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-gcvhv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.087520 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" podUID="47d73a9e-a36f-42a0-a81b-f3e0c51259e8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.113446 5122 patch_prober.go:28] 
interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:10:50 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld Feb 24 00:10:50 crc kubenswrapper[5122]: [+]process-running ok Feb 24 00:10:50 crc kubenswrapper[5122]: healthz check failed Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.113495 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.114436 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.114623 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.614571745 +0000 UTC m=+117.704026268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.115496 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.115815 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.615800169 +0000 UTC m=+117.705254672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.168095 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" podStartSLOduration=94.168058511 podStartE2EDuration="1m34.168058511s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:50.149253325 +0000 UTC m=+117.238707868" watchObservedRunningTime="2026-02-24 00:10:50.168058511 +0000 UTC m=+117.257513034" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.169474 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2pxbg" podStartSLOduration=94.169465551 podStartE2EDuration="1m34.169465551s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:50.167717292 +0000 UTC m=+117.257171815" watchObservedRunningTime="2026-02-24 00:10:50.169465551 +0000 UTC m=+117.258920064" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.221808 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.222037 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.722009631 +0000 UTC m=+117.811464144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.224794 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.225271 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.725258332 +0000 UTC m=+117.814712845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.313632 5122 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-gcvhv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.313694 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" podUID="47d73a9e-a36f-42a0-a81b-f3e0c51259e8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.318231 5122 patch_prober.go:28] interesting pod/downloads-747b44746d-m6v2b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.318465 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-m6v2b" podUID="a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 
00:10:50.327554 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.327738 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.827722178 +0000 UTC m=+117.917176691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.327945 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.328271 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:50.828261063 +0000 UTC m=+117.917715576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.381501 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" podStartSLOduration=94.381480992 podStartE2EDuration="1m34.381480992s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:50.368762456 +0000 UTC m=+117.458216969" watchObservedRunningTime="2026-02-24 00:10:50.381480992 +0000 UTC m=+117.470935505" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.396210 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.414258 5122 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-jq4c4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.414332 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" podUID="506b0459-7f41-4507-8377-f1fc79c51113" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.433704 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.433854 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.933836127 +0000 UTC m=+118.023290640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.443389 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.445138 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress-canary/ingress-canary-tvgxr" podStartSLOduration=8.445092362 podStartE2EDuration="8.445092362s" podCreationTimestamp="2026-02-24 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:50.394133516 +0000 UTC m=+117.483588029" watchObservedRunningTime="2026-02-24 00:10:50.445092362 +0000 UTC m=+117.534546885" Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.445669 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:50.945651697 +0000 UTC m=+118.035106210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.458910 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tl7gq" podStartSLOduration=94.458881907 podStartE2EDuration="1m34.458881907s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:50.432305884 +0000 UTC m=+117.521760407" watchObservedRunningTime="2026-02-24 00:10:50.458881907 +0000 UTC m=+117.548336490" Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.544697 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.544988 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.044972586 +0000 UTC m=+118.134427099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.645836 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.646245 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.146209188 +0000 UTC m=+118.235663701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.747104 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.747304 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.247276806 +0000 UTC m=+118.336731319 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.747521 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.747875 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.247858612 +0000 UTC m=+118.337313125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.848712 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.849319 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.349284109 +0000 UTC m=+118.438738622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.849611 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.850022 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.35000805 +0000 UTC m=+118.439462563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:50 crc kubenswrapper[5122]: I0224 00:10:50.950803 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:50 crc kubenswrapper[5122]: E0224 00:10:50.951257 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.451240062 +0000 UTC m=+118.540694575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.040705 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" event={"ID":"02290ceb-1a56-4ebf-9786-e7ab09faf7b7","Type":"ContainerStarted","Data":"347c04b6017b11877f7027a5bb6f09978be1c48243ef7c28d58f174688eec217"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.045766 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" event={"ID":"865b2fc7-0d57-48d7-a665-fa9a93257469","Type":"ContainerStarted","Data":"e838d5f54fcbd1ca5767674e15f810be79db020646bd4c50ed236728bc62ca69"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.045898 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.048310 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-267zx" event={"ID":"2f4ee5a2-9ca3-4990-896b-c81fe77da971","Type":"ContainerStarted","Data":"b76f4a3d5a4675866d02a72b22a00e9dadd8bb0fd19f605395d9de107ea491d8"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.049171 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-267zx" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.052256 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.052894 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" event={"ID":"fca96a93-d382-46a6-81cf-59840b39671e","Type":"ContainerStarted","Data":"cdc53ad45b3d5df9c83eac187d8aa900de77b0b7c6730304f845f55fa938e399"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.052945 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" event={"ID":"fca96a93-d382-46a6-81cf-59840b39671e","Type":"ContainerStarted","Data":"5ad97968591ba2c3afbe8ac8669d196179ef8172bf6af531c143f3a504b390dd"} Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.053061 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.553028839 +0000 UTC m=+118.642483352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.055643 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" event={"ID":"89a777c8-8c85-45e5-b60b-6abb996b25f8","Type":"ContainerStarted","Data":"e731f28f8104b5fd241b66c44f58b6af97843127b8a20322123b9bbb8e89059a"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.055681 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" event={"ID":"89a777c8-8c85-45e5-b60b-6abb996b25f8","Type":"ContainerStarted","Data":"dde24f5ea423d763d86b7b92e1d93c01b5864c68dd054c783f535cd64e0b0495"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.059519 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" event={"ID":"c515c9f9-2b46-41e2-ae64-abfbafbac0fa","Type":"ContainerStarted","Data":"808c58623a541fd0c3958d98ae83a21217c4d36836f97db0d834f05c4335b73a"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.062182 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" event={"ID":"0aa7aa06-a13d-414d-8164-544e84019bab","Type":"ContainerStarted","Data":"cf304ea1a48e49a5ed23ba3d46e1cbc5aea5e19bcbb364521d62828d5bbe5e21"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.065265 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hm9zj" podStartSLOduration=95.065249071 podStartE2EDuration="1m35.065249071s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.060313443 +0000 UTC m=+118.149767956" watchObservedRunningTime="2026-02-24 00:10:51.065249071 +0000 UTC m=+118.154703584" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.065925 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw" event={"ID":"4e08c688-1af4-4f0a-9cca-26dbe17bb618","Type":"ContainerStarted","Data":"b6e41b9b902842ba4923dad3b133b1214d92528809ee25fb8424cdc25b3c8628"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.065982 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw" event={"ID":"4e08c688-1af4-4f0a-9cca-26dbe17bb618","Type":"ContainerStarted","Data":"7f2640863d3ca0d8eebb189cbbb97bf69a02fab765243e6c117f9ab6760d5944"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.067044 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" podStartSLOduration=95.067036851 podStartE2EDuration="1m35.067036851s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:50.482293822 +0000 UTC m=+117.571748345" watchObservedRunningTime="2026-02-24 00:10:51.067036851 +0000 UTC m=+118.156491364" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.070201 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" 
event={"ID":"10e3bdb7-6f23-4553-8536-bf73e0b2a45c","Type":"ContainerStarted","Data":"6023bc896ddb74642517d28f14592b7ec2097e40618a2a823bec2c7372b93bdd"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.071346 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.073612 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.075227 5122 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-q8fpc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.075304 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" podUID="10e3bdb7-6f23-4553-8536-bf73e0b2a45c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.088708 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" event={"ID":"51ccf528-5b90-43e8-9e17-d283a0b1723f","Type":"ContainerStarted","Data":"b7537204677e49d0b4fae2a43e1401e8bd3133c590cc62592ef50c6a3718ae85"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.089002 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.091341 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.091436 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.098497 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dfp46" podStartSLOduration=95.098481791 podStartE2EDuration="1m35.098481791s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.096014322 +0000 UTC m=+118.185468855" watchObservedRunningTime="2026-02-24 00:10:51.098481791 +0000 UTC m=+118.187936304" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.104215 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn" event={"ID":"98c954e4-8a6f-4f90-a365-c781ba1eb8d9","Type":"ContainerStarted","Data":"9ab23fc0b8e0f60d2453bf167f2f145c9c29048d425314bdd7ca90f2d6683b11"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.104270 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn" event={"ID":"98c954e4-8a6f-4f90-a365-c781ba1eb8d9","Type":"ContainerStarted","Data":"df4a0935eb6d1dd304bd1f0593632dd8dbbadd0b3be836600c613105cbd2df4d"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.105491 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.117257 
5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:10:51 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld Feb 24 00:10:51 crc kubenswrapper[5122]: [+]process-running ok Feb 24 00:10:51 crc kubenswrapper[5122]: healthz check failed Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.117351 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.120758 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" event={"ID":"9cc08205-f0b1-47dc-a44c-da4611ff6b88","Type":"ContainerStarted","Data":"539ef3ae4e1bd6979f4dcdf923d9508e3d6918fd305708ba19877297724b761e"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.120818 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" event={"ID":"9cc08205-f0b1-47dc-a44c-da4611ff6b88","Type":"ContainerStarted","Data":"7eeff6f19079926b75a8f26c55eec85c75cd27e65762b3901b1d54c698172113"} Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.121435 5122 patch_prober.go:28] interesting pod/downloads-747b44746d-m6v2b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.121488 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-m6v2b" podUID="a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.122130 5122 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-jq4c4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.122186 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" podUID="506b0459-7f41-4507-8377-f1fc79c51113" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.123159 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.123189 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.123322 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" podUID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" gracePeriod=30 Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.124499 5122 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-5xl2l container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.124561 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.130530 5122 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-2jgbb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.130592 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" podUID="e5ff5c4f-19af-40c2-b4dc-140d9e75bf33" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.154523 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-267zx" podStartSLOduration=9.154499518 podStartE2EDuration="9.154499518s" podCreationTimestamp="2026-02-24 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.136551006 +0000 UTC m=+118.226005539" watchObservedRunningTime="2026-02-24 00:10:51.154499518 +0000 UTC m=+118.243954031" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.154637 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.154714 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-qr5vw" podStartSLOduration=95.154710164 podStartE2EDuration="1m35.154710164s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.154240531 +0000 UTC m=+118.243695044" watchObservedRunningTime="2026-02-24 00:10:51.154710164 +0000 UTC m=+118.244164677" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.154863 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/576077d5-2344-4315-b225-efadb4e914ed-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"576077d5-2344-4315-b225-efadb4e914ed\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.155583 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/576077d5-2344-4315-b225-efadb4e914ed-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"576077d5-2344-4315-b225-efadb4e914ed\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.156430 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:51.656381151 +0000 UTC m=+118.745835664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.182193 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-lhtfv" podStartSLOduration=95.182176152 podStartE2EDuration="1m35.182176152s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.181770101 +0000 UTC m=+118.271224614" watchObservedRunningTime="2026-02-24 00:10:51.182176152 +0000 UTC m=+118.271630665" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.209596 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" podStartSLOduration=95.209572379 podStartE2EDuration="1m35.209572379s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.207182542 +0000 UTC m=+118.296637065" watchObservedRunningTime="2026-02-24 00:10:51.209572379 +0000 UTC m=+118.299026892" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.257064 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/576077d5-2344-4315-b225-efadb4e914ed-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"576077d5-2344-4315-b225-efadb4e914ed\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.257150 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.257210 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/576077d5-2344-4315-b225-efadb4e914ed-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"576077d5-2344-4315-b225-efadb4e914ed\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.257351 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/576077d5-2344-4315-b225-efadb4e914ed-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"576077d5-2344-4315-b225-efadb4e914ed\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.257658 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.757643654 +0000 UTC m=+118.847098167 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.268601 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-2k6m5" podStartSLOduration=95.26858388 podStartE2EDuration="1m35.26858388s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.238701314 +0000 UTC m=+118.328155847" watchObservedRunningTime="2026-02-24 00:10:51.26858388 +0000 UTC m=+118.358038403" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.270662 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" podStartSLOduration=95.270652498 podStartE2EDuration="1m35.270652498s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.267713905 +0000 UTC m=+118.357168448" watchObservedRunningTime="2026-02-24 00:10:51.270652498 +0000 UTC m=+118.360107011" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.287166 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-g6n9r" podStartSLOduration=95.287153259 podStartE2EDuration="1m35.287153259s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.286613104 +0000 UTC m=+118.376067617" watchObservedRunningTime="2026-02-24 00:10:51.287153259 +0000 UTC m=+118.376607772" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.297228 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/576077d5-2344-4315-b225-efadb4e914ed-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"576077d5-2344-4315-b225-efadb4e914ed\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.339741 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6z58r" podStartSLOduration=95.33971961 podStartE2EDuration="1m35.33971961s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.312469818 +0000 UTC m=+118.401924331" watchObservedRunningTime="2026-02-24 00:10:51.33971961 +0000 UTC m=+118.429174133" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.371663 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.375140 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:51.875024148 +0000 UTC m=+118.964478661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.393945 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-spmnw" podStartSLOduration=95.393929456 podStartE2EDuration="1m35.393929456s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.341398287 +0000 UTC m=+118.430852800" watchObservedRunningTime="2026-02-24 00:10:51.393929456 +0000 UTC m=+118.483383959" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.413057 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.414448 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" podStartSLOduration=96.41442675 podStartE2EDuration="1m36.41442675s" podCreationTimestamp="2026-02-24 00:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.395311335 +0000 UTC m=+118.484765858" watchObservedRunningTime="2026-02-24 00:10:51.41442675 +0000 UTC m=+118.503881263" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.441882 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" podStartSLOduration=95.441867298 podStartE2EDuration="1m35.441867298s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.439987775 +0000 UTC m=+118.529442308" watchObservedRunningTime="2026-02-24 00:10:51.441867298 +0000 UTC m=+118.531321811" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.467153 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-4vqwn" podStartSLOduration=95.467134904 podStartE2EDuration="1m35.467134904s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:51.462194576 +0000 UTC m=+118.551649099" watchObservedRunningTime="2026-02-24 00:10:51.467134904 +0000 UTC m=+118.556589417" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.476326 5122 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.476640 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:51.97662765 +0000 UTC m=+119.066082163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.577544 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.577845 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.077830121 +0000 UTC m=+119.167284634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.679560 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.679908 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.179895817 +0000 UTC m=+119.269350330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.688619 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51474: no serving certificate available for the kubelet" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.781657 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.781920 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.281905711 +0000 UTC m=+119.371360224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.791525 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51482: no serving certificate available for the kubelet" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.851556 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51486: no serving certificate available for the kubelet" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.885894 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.886225 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.386212099 +0000 UTC m=+119.475666602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.952845 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.962287 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51500: no serving certificate available for the kubelet" Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.986770 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.986897 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.486878285 +0000 UTC m=+119.576332798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:51 crc kubenswrapper[5122]: I0224 00:10:51.987117 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:51 crc kubenswrapper[5122]: E0224 00:10:51.987367 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.487360378 +0000 UTC m=+119.576814891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.064415 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51502: no serving certificate available for the kubelet" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.088377 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.088520 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.588488098 +0000 UTC m=+119.677942611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.088650 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.088956 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.58894411 +0000 UTC m=+119.678398623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.113030 5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:10:52 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld Feb 24 00:10:52 crc kubenswrapper[5122]: [+]process-running ok Feb 24 00:10:52 crc kubenswrapper[5122]: healthz check failed Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.113104 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.124920 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"576077d5-2344-4315-b225-efadb4e914ed","Type":"ContainerStarted","Data":"7d840eca57917dcc9fbf9db5a88db667c58de37c95f6e8c8a4217acb31bcb177"} Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.190242 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.190639 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.690623505 +0000 UTC m=+119.780078018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.236113 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51512: no serving certificate available for the kubelet" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.292442 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.292765 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.792752161 +0000 UTC m=+119.882206674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.398551 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.399315 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:52.899267151 +0000 UTC m=+119.988721664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.399713 5122 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-5xl2l container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.399789 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.400338 5122 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-q8fpc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.400405 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" podUID="10e3bdb7-6f23-4553-8536-bf73e0b2a45c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Feb 24 00:10:52 crc kubenswrapper[5122]: 
I0224 00:10:52.405666 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-2jgbb" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.449342 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-jq4c4" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.455923 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51518: no serving certificate available for the kubelet" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.501012 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.507955 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.007937091 +0000 UTC m=+120.097391604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.572458 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.594260 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-rdpqq" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.603444 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.603834 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.103818394 +0000 UTC m=+120.193272907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.710138 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.711423 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.211410824 +0000 UTC m=+120.300865337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.778029 5122 scope.go:117] "RemoveContainer" containerID="e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.811443 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.811926 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.311908205 +0000 UTC m=+120.401362708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.885915 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51532: no serving certificate available for the kubelet" Feb 24 00:10:52 crc kubenswrapper[5122]: I0224 00:10:52.913411 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:52 crc kubenswrapper[5122]: E0224 00:10:52.913922 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.413904889 +0000 UTC m=+120.503359402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.015222 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.015239 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.515202213 +0000 UTC m=+120.604656726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.015761 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.016405 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.516383826 +0000 UTC m=+120.605838339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.105393 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lvn26"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.115173 5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:10:53 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld Feb 24 00:10:53 crc kubenswrapper[5122]: [+]process-running ok Feb 24 00:10:53 crc kubenswrapper[5122]: healthz check failed Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.115523 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.116438 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.116694 5122 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.616647261 +0000 UTC m=+120.706101764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.116929 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.117436 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.617428323 +0000 UTC m=+120.706882836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.139264 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.153588 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.156960 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvn26"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.176269 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"576077d5-2344-4315-b225-efadb4e914ed","Type":"ContainerStarted","Data":"e944d494d601f267916886b660723a463b1db4791c47eedc159860a9e048fc3b"} Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.189675 5122 generic.go:358] "Generic (PLEG): container finished" podID="91922081-9786-47ef-ad37-7d1092f63918" containerID="2c6af9c07eef5e92014d019f0e749901d4ad7e3400c2ddc9334346e20b28d3cc" exitCode=0 Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.191000 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" event={"ID":"91922081-9786-47ef-ad37-7d1092f63918","Type":"ContainerDied","Data":"2c6af9c07eef5e92014d019f0e749901d4ad7e3400c2ddc9334346e20b28d3cc"} Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 
00:10:53.218555 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.218732 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtgkd\" (UniqueName: \"kubernetes.io/projected/b49afeaf-b456-453e-899d-8fccce0a72b9-kube-api-access-jtgkd\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.218775 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-utilities\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.218815 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-catalog-content\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.218943 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.718923462 +0000 UTC m=+120.808377975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.282231 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d5844"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.319602 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-utilities\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.319690 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-catalog-content\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.319730 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.320354 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-utilities\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.320371 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtgkd\" (UniqueName: \"kubernetes.io/projected/b49afeaf-b456-453e-899d-8fccce0a72b9-kube-api-access-jtgkd\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.321564 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.821551293 +0000 UTC m=+120.911005806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.329357 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-catalog-content\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.332144 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5844"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.332308 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.336328 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.360006 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtgkd\" (UniqueName: \"kubernetes.io/projected/b49afeaf-b456-453e-899d-8fccce0a72b9-kube-api-access-jtgkd\") pod \"community-operators-lvn26\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.421886 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.422127 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-utilities\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.422192 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:53.922147418 +0000 UTC m=+121.011601931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.422381 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-catalog-content\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.422661 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.422974 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9pfc\" (UniqueName: \"kubernetes.io/projected/01c0c130-15b5-40ed-b1c9-2d4a979a5953-kube-api-access-d9pfc\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.423090 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:53.923081384 +0000 UTC m=+121.012535897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.441969 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.441953652 podStartE2EDuration="2.441953652s" podCreationTimestamp="2026-02-24 00:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:53.441641313 +0000 UTC m=+120.531095826" watchObservedRunningTime="2026-02-24 00:10:53.441953652 +0000 UTC m=+120.531408165" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.468033 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.494723 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpmsz"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.531026 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.534597 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-catalog-content\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.534801 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9pfc\" (UniqueName: \"kubernetes.io/projected/01c0c130-15b5-40ed-b1c9-2d4a979a5953-kube-api-access-d9pfc\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.534853 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-utilities\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.535388 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-utilities\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.535507 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-catalog-content\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.535605 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.035586791 +0000 UTC m=+121.125041304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.543722 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpmsz"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.544091 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.562834 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9pfc\" (UniqueName: \"kubernetes.io/projected/01c0c130-15b5-40ed-b1c9-2d4a979a5953-kube-api-access-d9pfc\") pod \"certified-operators-d5844\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.636060 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc8vs\" (UniqueName: \"kubernetes.io/projected/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-kube-api-access-sc8vs\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.636152 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-utilities\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.636236 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.636262 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-catalog-content\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.636573 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.136561586 +0000 UTC m=+121.226016099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.638485 5122 ???:1] "http: TLS handshake error from 192.168.126.11:51542: no serving certificate available for the kubelet" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.653092 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-gcvhv" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.674989 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.680759 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.735225 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nq2x2"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.742419 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.742624 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sc8vs\" (UniqueName: \"kubernetes.io/projected/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-kube-api-access-sc8vs\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.742692 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-utilities\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.742747 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-catalog-content\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.743598 5122 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.24357645 +0000 UTC m=+121.333030963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.743676 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-utilities\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.743893 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-catalog-content\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.770194 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nq2x2"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.770386 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.800023 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc8vs\" (UniqueName: \"kubernetes.io/projected/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-kube-api-access-sc8vs\") pod \"community-operators-bpmsz\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") " pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.848625 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.848684 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-utilities\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.850905 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-catalog-content\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.851292 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84zj2\" (UniqueName: 
\"kubernetes.io/projected/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-kube-api-access-84zj2\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.851363 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.351344205 +0000 UTC m=+121.440798718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.884396 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpmsz" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.893617 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvn26"] Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.952488 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.952702 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-catalog-content\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.952773 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-84zj2\" (UniqueName: \"kubernetes.io/projected/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-kube-api-access-84zj2\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.952803 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-utilities\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.953257 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-utilities\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: E0224 00:10:53.953322 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.453306187 +0000 UTC m=+121.542760700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.953533 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-catalog-content\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:53 crc kubenswrapper[5122]: I0224 00:10:53.996119 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-84zj2\" (UniqueName: \"kubernetes.io/projected/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-kube-api-access-84zj2\") pod \"certified-operators-nq2x2\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") " pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.034556 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-d5844"] Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.053734 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.054115 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.554064746 +0000 UTC m=+121.643519259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.120120 5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:10:54 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld Feb 24 00:10:54 crc kubenswrapper[5122]: [+]process-running ok Feb 24 00:10:54 crc kubenswrapper[5122]: healthz check failed Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.120430 5122 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.122240 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.154521 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.154584 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.654564528 +0000 UTC m=+121.744019041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.154759 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.155013 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.65500625 +0000 UTC m=+121.744460763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.193749 5122 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-q8fpc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.193822 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" podUID="10e3bdb7-6f23-4553-8536-bf73e0b2a45c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.222869 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5844" event={"ID":"01c0c130-15b5-40ed-b1c9-2d4a979a5953","Type":"ContainerStarted","Data":"6f32d0cf4a7eb7ff8deb7cef5a1a0fd0bc2edde911438e9d51f9c6d08585d6e8"} Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.224552 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" event={"ID":"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04","Type":"ContainerStarted","Data":"75c88fe5ca584870a1f3ac6aca788b697556eed36cd587caa670b2628625f8f0"} Feb 24 00:10:54 crc kubenswrapper[5122]: 
I0224 00:10:54.225673 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn26" event={"ID":"b49afeaf-b456-453e-899d-8fccce0a72b9","Type":"ContainerStarted","Data":"645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e"} Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.225717 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn26" event={"ID":"b49afeaf-b456-453e-899d-8fccce0a72b9","Type":"ContainerStarted","Data":"7cc08e080ecdeda146d960c0a0ea4f3a29f78db047bf524fbea1db3c808401ec"} Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.226874 5122 generic.go:358] "Generic (PLEG): container finished" podID="576077d5-2344-4315-b225-efadb4e914ed" containerID="e944d494d601f267916886b660723a463b1db4791c47eedc159860a9e048fc3b" exitCode=0 Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.226958 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"576077d5-2344-4315-b225-efadb4e914ed","Type":"ContainerDied","Data":"e944d494d601f267916886b660723a463b1db4791c47eedc159860a9e048fc3b"} Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.229843 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.231540 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4f05afbe3b3aa2acbf7bb698b08b183431cb39128be15abf2ee678640de1a2f9"} Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.253647 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.259805 5122 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.260208 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.760190393 +0000 UTC m=+121.849644906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.275423 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpmsz"] Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.361041 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.361460 5122 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.861446686 +0000 UTC m=+121.950901199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.445993 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.445975851 podStartE2EDuration="28.445975851s" podCreationTimestamp="2026-02-24 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:54.292170858 +0000 UTC m=+121.381625371" watchObservedRunningTime="2026-02-24 00:10:54.445975851 +0000 UTC m=+121.535430364" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.446443 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nq2x2"] Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.462740 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.463355 5122 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:54.963338316 +0000 UTC m=+122.052792829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: W0224 00:10:54.483695 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb28573a3_2eb9_41dd_8eb5_7a4f9b677028.slice/crio-60cd59977e44e7ad534cc221ed683fd8ec4e62d8c7e46359b4d5405291776990 WatchSource:0}: Error finding container 60cd59977e44e7ad534cc221ed683fd8ec4e62d8c7e46359b4d5405291776990: Status 404 returned error can't find the container with id 60cd59977e44e7ad534cc221ed683fd8ec4e62d8c7e46359b4d5405291776990 Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.564421 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.564892 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 00:10:55.064876087 +0000 UTC m=+122.154330600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.581081 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.668644 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.668698 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume\") pod \"91922081-9786-47ef-ad37-7d1092f63918\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.668891 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume\") pod \"91922081-9786-47ef-ad37-7d1092f63918\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.668937 5122 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5zlr7\" (UniqueName: \"kubernetes.io/projected/91922081-9786-47ef-ad37-7d1092f63918-kube-api-access-5zlr7\") pod \"91922081-9786-47ef-ad37-7d1092f63918\" (UID: \"91922081-9786-47ef-ad37-7d1092f63918\") " Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.673260 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.173232488 +0000 UTC m=+122.262687001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.674541 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume" (OuterVolumeSpecName: "config-volume") pod "91922081-9786-47ef-ad37-7d1092f63918" (UID: "91922081-9786-47ef-ad37-7d1092f63918"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.678349 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "91922081-9786-47ef-ad37-7d1092f63918" (UID: "91922081-9786-47ef-ad37-7d1092f63918"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.679316 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91922081-9786-47ef-ad37-7d1092f63918-kube-api-access-5zlr7" (OuterVolumeSpecName: "kube-api-access-5zlr7") pod "91922081-9786-47ef-ad37-7d1092f63918" (UID: "91922081-9786-47ef-ad37-7d1092f63918"). InnerVolumeSpecName "kube-api-access-5zlr7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.770873 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.771132 5122 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91922081-9786-47ef-ad37-7d1092f63918-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.771153 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zlr7\" (UniqueName: \"kubernetes.io/projected/91922081-9786-47ef-ad37-7d1092f63918-kube-api-access-5zlr7\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.771169 5122 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91922081-9786-47ef-ad37-7d1092f63918-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.771463 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.271447586 +0000 UTC m=+122.360902099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.872895 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.873067 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.373041348 +0000 UTC m=+122.462495851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.873215 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.873787 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.373770529 +0000 UTC m=+122.463225042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.885552 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.885683 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.891846 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.894153 5122 patch_prober.go:28] interesting pod/downloads-747b44746d-m6v2b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.894208 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-m6v2b" podUID="a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.974874 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.975049 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.475022991 +0000 UTC m=+122.564477504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.975784 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:54 crc kubenswrapper[5122]: E0224 00:10:54.976295 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.476287327 +0000 UTC m=+122.565741840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:54 crc kubenswrapper[5122]: I0224 00:10:54.996605 5122 ???:1] "http: TLS handshake error from 192.168.126.11:59716: no serving certificate available for the kubelet" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.077507 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.077657 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.577632952 +0000 UTC m=+122.667087465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.078163 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.078447 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.578440785 +0000 UTC m=+122.667895298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.089648 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cpk76"] Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.090313 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91922081-9786-47ef-ad37-7d1092f63918" containerName="collect-profiles" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.090333 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="91922081-9786-47ef-ad37-7d1092f63918" containerName="collect-profiles" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.090435 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="91922081-9786-47ef-ad37-7d1092f63918" containerName="collect-profiles" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.127343 5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:10:55 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld Feb 24 00:10:55 crc kubenswrapper[5122]: [+]process-running ok Feb 24 00:10:55 crc kubenswrapper[5122]: healthz check failed Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.127441 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.142795 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.142838 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpk76"] Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.143410 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.149839 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.179439 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.179889 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.679871042 +0000 UTC m=+122.769325555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.203022 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.203053 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.215964 5122 patch_prober.go:28] interesting pod/console-64d44f6ddf-7fw77 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.216017 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7fw77" podUID="0f2d24c5-cbfa-410d-8105-d67830202ff1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.281815 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-utilities\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.282136 5122 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.282251 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-catalog-content\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.282343 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdwq\" (UniqueName: \"kubernetes.io/projected/78a838b3-595e-4b72-b482-93f22e3cd1a0-kube-api-access-wcdwq\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.282984 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.782968647 +0000 UTC m=+122.872423160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.286776 5122 generic.go:358] "Generic (PLEG): container finished" podID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerID="757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca" exitCode=0 Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.286903 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5844" event={"ID":"01c0c130-15b5-40ed-b1c9-2d4a979a5953","Type":"ContainerDied","Data":"757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca"} Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.307382 5122 generic.go:358] "Generic (PLEG): container finished" podID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerID="5ef11ad801daaf2a03ba4da307a2c5594e8b70c8262e4c29b2b730e1cfec63e9" exitCode=0 Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.307678 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmsz" event={"ID":"780f6ddc-69b1-4e7e-ac47-c5dccdde6537","Type":"ContainerDied","Data":"5ef11ad801daaf2a03ba4da307a2c5594e8b70c8262e4c29b2b730e1cfec63e9"} Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.307704 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmsz" event={"ID":"780f6ddc-69b1-4e7e-ac47-c5dccdde6537","Type":"ContainerStarted","Data":"1d4c041e45aaef05c715936d7cb8d604b460dea7a3e54a3def867336db5032f6"} Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.315787 
5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" event={"ID":"91922081-9786-47ef-ad37-7d1092f63918","Type":"ContainerDied","Data":"c22576154f6f56e5ba0f613296275eb21d341bdf5271f9ad35ef4c2bb1456d2c"} Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.315845 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c22576154f6f56e5ba0f613296275eb21d341bdf5271f9ad35ef4c2bb1456d2c" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.315991 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-j8d8q" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.329827 5122 generic.go:358] "Generic (PLEG): container finished" podID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerID="645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e" exitCode=0 Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.329919 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn26" event={"ID":"b49afeaf-b456-453e-899d-8fccce0a72b9","Type":"ContainerDied","Data":"645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e"} Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.344792 5122 generic.go:358] "Generic (PLEG): container finished" podID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerID="9a87e704005b97403dd44dfdb6f5d26fe37d7539b7183b390d44e3d76a60b4ec" exitCode=0 Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.344994 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2x2" event={"ID":"b28573a3-2eb9-41dd-8eb5-7a4f9b677028","Type":"ContainerDied","Data":"9a87e704005b97403dd44dfdb6f5d26fe37d7539b7183b390d44e3d76a60b4ec"} Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.345023 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-nq2x2" event={"ID":"b28573a3-2eb9-41dd-8eb5-7a4f9b677028","Type":"ContainerStarted","Data":"60cd59977e44e7ad534cc221ed683fd8ec4e62d8c7e46359b4d5405291776990"} Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.356639 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-zn588" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.384238 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.384445 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wcdwq\" (UniqueName: \"kubernetes.io/projected/78a838b3-595e-4b72-b482-93f22e3cd1a0-kube-api-access-wcdwq\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.384518 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-utilities\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.384643 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-catalog-content\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:10:55 
crc kubenswrapper[5122]: I0224 00:10:55.385138 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-catalog-content\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76"
Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.385213 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.885197566 +0000 UTC m=+122.974652079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.386583 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-utilities\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.413910 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcdwq\" (UniqueName: \"kubernetes.io/projected/78a838b3-595e-4b72-b482-93f22e3cd1a0-kube-api-access-wcdwq\") pod \"redhat-marketplace-cpk76\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " pod="openshift-marketplace/redhat-marketplace-cpk76"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.473507 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cpk76"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.475862 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zx7"]
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.489249 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.490995 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:55.990982186 +0000 UTC m=+123.080436699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.555273 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zx7"]
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.555445 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.595567 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.596032 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.096018114 +0000 UTC m=+123.185472627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.697709 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.697774 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9bb\" (UniqueName: \"kubernetes.io/projected/8c82e1f6-e130-4361-a3a2-13613f953cbb-kube-api-access-9c9bb\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.697852 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-catalog-content\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.697880 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-utilities\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.698191 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.198179903 +0000 UTC m=+123.287634416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.796665 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.803626 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.803836 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9c9bb\" (UniqueName: \"kubernetes.io/projected/8c82e1f6-e130-4361-a3a2-13613f953cbb-kube-api-access-9c9bb\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.803921 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-catalog-content\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.803955 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-utilities\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.804404 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-utilities\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.804470 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.304454206 +0000 UTC m=+123.393908709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.804970 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-catalog-content\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.825422 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c9bb\" (UniqueName: \"kubernetes.io/projected/8c82e1f6-e130-4361-a3a2-13613f953cbb-kube-api-access-9c9bb\") pod \"redhat-marketplace-s6zx7\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") " pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.878783 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.917868 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/576077d5-2344-4315-b225-efadb4e914ed-kube-api-access\") pod \"576077d5-2344-4315-b225-efadb4e914ed\" (UID: \"576077d5-2344-4315-b225-efadb4e914ed\") "
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.917907 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/576077d5-2344-4315-b225-efadb4e914ed-kubelet-dir\") pod \"576077d5-2344-4315-b225-efadb4e914ed\" (UID: \"576077d5-2344-4315-b225-efadb4e914ed\") "
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.918162 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:55 crc kubenswrapper[5122]: E0224 00:10:55.918503 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.418490615 +0000 UTC m=+123.507945128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.918630 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/576077d5-2344-4315-b225-efadb4e914ed-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "576077d5-2344-4315-b225-efadb4e914ed" (UID: "576077d5-2344-4315-b225-efadb4e914ed"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.928612 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576077d5-2344-4315-b225-efadb4e914ed-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "576077d5-2344-4315-b225-efadb4e914ed" (UID: "576077d5-2344-4315-b225-efadb4e914ed"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:10:55 crc kubenswrapper[5122]: I0224 00:10:55.962900 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpk76"]
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.019817 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.020385 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/576077d5-2344-4315-b225-efadb4e914ed-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.020410 5122 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/576077d5-2344-4315-b225-efadb4e914ed-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.020511 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.520482488 +0000 UTC m=+123.609937001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.114763 5122 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 00:10:56 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld
Feb 24 00:10:56 crc kubenswrapper[5122]: [+]process-running ok
Feb 24 00:10:56 crc kubenswrapper[5122]: healthz check failed
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.114850 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.121326 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.122265 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.622243335 +0000 UTC m=+123.711697928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.162637 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zx7"]
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.222300 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.222583 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.722563782 +0000 UTC m=+123.812018295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.222716 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.222960 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.722953473 +0000 UTC m=+123.812407986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.262445 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4rhfb"]
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.264356 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="576077d5-2344-4315-b225-efadb4e914ed" containerName="pruner"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.264427 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="576077d5-2344-4315-b225-efadb4e914ed" containerName="pruner"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.265520 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="576077d5-2344-4315-b225-efadb4e914ed" containerName="pruner"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.324830 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.325118 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.825099301 +0000 UTC m=+123.914553804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.325387 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.325751 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.825743439 +0000 UTC m=+123.915197952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.426626 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.426818 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.926788385 +0000 UTC m=+124.016242898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.427007 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.427400 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:56.927383392 +0000 UTC m=+124.016837905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.428398 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"576077d5-2344-4315-b225-efadb4e914ed","Type":"ContainerDied","Data":"7d840eca57917dcc9fbf9db5a88db667c58de37c95f6e8c8a4217acb31bcb177"}
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.428434 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d840eca57917dcc9fbf9db5a88db667c58de37c95f6e8c8a4217acb31bcb177"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.428446 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4rhfb"]
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.428459 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zx7" event={"ID":"8c82e1f6-e130-4361-a3a2-13613f953cbb","Type":"ContainerStarted","Data":"01abcee582b4017f627f1892ecd7eb02143367a64dbf3d85a3139b38d715a006"}
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.428470 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpk76" event={"ID":"78a838b3-595e-4b72-b482-93f22e3cd1a0","Type":"ContainerStarted","Data":"2694aaddad634dd5dd2c013a7d297be14574fe9f69fcc8822c09ea0e5fcf19a0"}
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.428519 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.429038 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.433471 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.523007 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.527580 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.527686 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.027662277 +0000 UTC m=+124.117116810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.528005 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-utilities\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.528153 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-catalog-content\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.528326 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.528379 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86xmh\" (UniqueName: \"kubernetes.io/projected/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-kube-api-access-86xmh\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.529583 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.029572001 +0000 UTC m=+124.119026514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.629266 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.629485 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-catalog-content\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.629603 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-86xmh\" (UniqueName: \"kubernetes.io/projected/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-kube-api-access-86xmh\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.629638 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-utilities\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.630042 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-utilities\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.630133 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.130117634 +0000 UTC m=+124.219572147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.630333 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-catalog-content\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.648912 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-86xmh\" (UniqueName: \"kubernetes.io/projected/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-kube-api-access-86xmh\") pod \"redhat-operators-4rhfb\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.731354 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.731677 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.231663705 +0000 UTC m=+124.321118218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.762577 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.837756 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.838320 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.338299728 +0000 UTC m=+124.427754241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.886168 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.886207 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.886228 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pjd62"]
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.895581 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.896166 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 24 00:10:56 crc kubenswrapper[5122]: I0224 00:10:56.939418 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:56 crc kubenswrapper[5122]: E0224 00:10:56.939744 5122 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.439730306 +0000 UTC m=+124.529184819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.022726 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjd62"] Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.022949 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.041132 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.041291 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0870c79c-2d61-4f10-9269-7477e84e7b9d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0870c79c-2d61-4f10-9269-7477e84e7b9d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.041315 5122 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0870c79c-2d61-4f10-9269-7477e84e7b9d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0870c79c-2d61-4f10-9269-7477e84e7b9d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.041552 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.541536194 +0000 UTC m=+124.630990707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.056772 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4rhfb"] Feb 24 00:10:57 crc kubenswrapper[5122]: W0224 00:10:57.066723 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ddb4692_b755_4e0e_8c84_3e3c0440c3e8.slice/crio-2b283d05f4c57ff60a600de34928d5039f7718e6d69af5aa666ca840201eeaeb WatchSource:0}: Error finding container 2b283d05f4c57ff60a600de34928d5039f7718e6d69af5aa666ca840201eeaeb: Status 404 returned error can't find the container with id 2b283d05f4c57ff60a600de34928d5039f7718e6d69af5aa666ca840201eeaeb Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.119463 5122 patch_prober.go:28] interesting 
pod/router-default-68cf44c8b8-xtm2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:10:57 crc kubenswrapper[5122]: [-]has-synced failed: reason withheld Feb 24 00:10:57 crc kubenswrapper[5122]: [+]process-running ok Feb 24 00:10:57 crc kubenswrapper[5122]: healthz check failed Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.119721 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m" podUID="fc07aacc-6c08-4ef3-a058-b6a823315eec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.142837 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-catalog-content\") pod \"redhat-operators-pjd62\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.142896 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.143002 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-utilities\") pod \"redhat-operators-pjd62\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 
00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.143100 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0870c79c-2d61-4f10-9269-7477e84e7b9d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0870c79c-2d61-4f10-9269-7477e84e7b9d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.143124 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0870c79c-2d61-4f10-9269-7477e84e7b9d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0870c79c-2d61-4f10-9269-7477e84e7b9d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.143205 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.643193338 +0000 UTC m=+124.732647851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.143227 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrptj\" (UniqueName: \"kubernetes.io/projected/e9e21a16-c724-46be-8e7c-c8987db90f7b-kube-api-access-nrptj\") pod \"redhat-operators-pjd62\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.143431 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0870c79c-2d61-4f10-9269-7477e84e7b9d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0870c79c-2d61-4f10-9269-7477e84e7b9d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.175907 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0870c79c-2d61-4f10-9269-7477e84e7b9d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0870c79c-2d61-4f10-9269-7477e84e7b9d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.246801 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.246974 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.74694054 +0000 UTC m=+124.836395063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.247105 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-utilities\") pod \"redhat-operators-pjd62\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.247241 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrptj\" (UniqueName: \"kubernetes.io/projected/e9e21a16-c724-46be-8e7c-c8987db90f7b-kube-api-access-nrptj\") pod \"redhat-operators-pjd62\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.247633 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-catalog-content\") pod \"redhat-operators-pjd62\" (UID: 
\"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.247825 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.248203 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-utilities\") pod \"redhat-operators-pjd62\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.248232 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.748223046 +0000 UTC m=+124.837677559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.248911 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-catalog-content\") pod \"redhat-operators-pjd62\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.262798 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.268857 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrptj\" (UniqueName: \"kubernetes.io/projected/e9e21a16-c724-46be-8e7c-c8987db90f7b-kube-api-access-nrptj\") pod \"redhat-operators-pjd62\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") " pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.342431 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.350616 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.351225 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.851204477 +0000 UTC m=+124.940658990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.383238 5122 generic.go:358] "Generic (PLEG): container finished" podID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerID="fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da" exitCode=0 Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.383308 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpk76" event={"ID":"78a838b3-595e-4b72-b482-93f22e3cd1a0","Type":"ContainerDied","Data":"fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da"} Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.385029 5122 generic.go:358] "Generic (PLEG): 
container finished" podID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerID="b468ab622d33f6b7278d88cfaf23aff3b0b625d5c3be038465667e7ec2bb0de2" exitCode=0 Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.385113 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zx7" event={"ID":"8c82e1f6-e130-4361-a3a2-13613f953cbb","Type":"ContainerDied","Data":"b468ab622d33f6b7278d88cfaf23aff3b0b625d5c3be038465667e7ec2bb0de2"} Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.386583 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhfb" event={"ID":"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8","Type":"ContainerStarted","Data":"2b283d05f4c57ff60a600de34928d5039f7718e6d69af5aa666ca840201eeaeb"} Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.453064 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.453384 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:57.953372795 +0000 UTC m=+125.042827308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.463901 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 24 00:10:57 crc kubenswrapper[5122]: W0224 00:10:57.493979 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0870c79c_2d61_4f10_9269_7477e84e7b9d.slice/crio-9ad50ba87f2108cbf78d44839e6f31857c4ddbf9c72f64c38e4e2d4ecbfd49e1 WatchSource:0}: Error finding container 9ad50ba87f2108cbf78d44839e6f31857c4ddbf9c72f64c38e4e2d4ecbfd49e1: Status 404 returned error can't find the container with id 9ad50ba87f2108cbf78d44839e6f31857c4ddbf9c72f64c38e4e2d4ecbfd49e1 Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.553663 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.554808 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.054793013 +0000 UTC m=+125.144247526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.590635 5122 ???:1] "http: TLS handshake error from 192.168.126.11:59726: no serving certificate available for the kubelet" Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.593840 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjd62"] Feb 24 00:10:57 crc kubenswrapper[5122]: W0224 00:10:57.604916 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9e21a16_c724_46be_8e7c_c8987db90f7b.slice/crio-18ed4484a29636e64fe788527a21f069608863e74fa0af5bdec8a62417f40867 WatchSource:0}: Error finding container 18ed4484a29636e64fe788527a21f069608863e74fa0af5bdec8a62417f40867: Status 404 returned error can't find the container with id 18ed4484a29636e64fe788527a21f069608863e74fa0af5bdec8a62417f40867 Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.657235 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.657686 5122 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.157669351 +0000 UTC m=+125.247123864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.760382 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.260324033 +0000 UTC m=+125.349778546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.761716 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.762213 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.762693 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.262683519 +0000 UTC m=+125.352138032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.864350 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.864573 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.364539958 +0000 UTC m=+125.453994481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.864725 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.865255 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.365242628 +0000 UTC m=+125.454697141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.959626 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 00:10:57 crc kubenswrapper[5122]: I0224 00:10:57.966334 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.966895 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.466850671 +0000 UTC m=+125.556305184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.966959 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.969577 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 00:10:57 crc kubenswrapper[5122]: E0224 00:10:57.969653 5122 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" podUID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.068185 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.068541 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.568528145 +0000 UTC m=+125.657982658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.113853 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.118231 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-xtm2m"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.169918 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.170127 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.670096817 +0000 UTC m=+125.759551900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.170458 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.170771 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.670762785 +0000 UTC m=+125.760217298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.272213 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.272715 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.772665736 +0000 UTC m=+125.862120249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.272821 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.273526 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.7735188 +0000 UTC m=+125.862973313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.373899 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.374370 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.874353661 +0000 UTC m=+125.963808174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.396675 5122 generic.go:358] "Generic (PLEG): container finished" podID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerID="aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425" exitCode=0
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.397898 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhfb" event={"ID":"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8","Type":"ContainerDied","Data":"aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425"}
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.399521 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0870c79c-2d61-4f10-9269-7477e84e7b9d","Type":"ContainerStarted","Data":"9ad50ba87f2108cbf78d44839e6f31857c4ddbf9c72f64c38e4e2d4ecbfd49e1"}
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.413644 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjd62" event={"ID":"e9e21a16-c724-46be-8e7c-c8987db90f7b","Type":"ContainerStarted","Data":"18ed4484a29636e64fe788527a21f069608863e74fa0af5bdec8a62417f40867"}
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.476085 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.476929 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:58.97691061 +0000 UTC m=+126.066365123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.577082 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.577473 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.077150165 +0000 UTC m=+126.166604678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.578016 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.578181 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.578289 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.578332 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.578364 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.579224 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.079190382 +0000 UTC m=+126.168644895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.585039 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.585477 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.585845 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.591447 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.592018 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.608588 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.613841 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.625272 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.679379 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.679474 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.179454537 +0000 UTC m=+126.268909040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.680011 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.680124 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.680516 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.180503316 +0000 UTC m=+126.269957829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.681556 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.696042 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae9b0319-d6e5-4434-9036-346a520931c8-metrics-certs\") pod \"network-metrics-daemon-gwpx2\" (UID: \"ae9b0319-d6e5-4434-9036-346a520931c8\") " pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.724215 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.734237 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.742992 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.752194 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gwpx2"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.781214 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.781382 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.281355077 +0000 UTC m=+126.370809590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.781638 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.781926 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.281919193 +0000 UTC m=+126.371373706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.790013 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.884666 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.884999 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.384979566 +0000 UTC m=+126.474434079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:58 crc kubenswrapper[5122]: I0224 00:10:58.986447 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:58 crc kubenswrapper[5122]: E0224 00:10:58.986813 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.486800335 +0000 UTC m=+126.576254848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.036050 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gwpx2"]
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.090834 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.091151 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.591061852 +0000 UTC m=+126.680516355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.192716 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.193299 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.693285622 +0000 UTC m=+126.782740135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.217531 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-267zx"
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.295055 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.295606 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.795549403 +0000 UTC m=+126.885003916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.397086 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.397472 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:10:59.897457644 +0000 UTC m=+126.986912157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.441806 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"5ffc6faa9f76c8f3667717b5e93d237719c5fe4af819f228d59861f8509e19e1"}
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.441855 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"cb3f947954c488c08ca6e3fe2250453c15e404081817e9395890d2b07590abff"}
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.444277 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"d0f0ee38e682136ae4b534a9a0ecc134198a639730c9ef54276bc4dc27849e51"}
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.470850 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gwpx2" event={"ID":"ae9b0319-d6e5-4434-9036-346a520931c8","Type":"ContainerStarted","Data":"d316632a1ca571252f4de809e13f66db87152c8aa67b1c1859fb458457316202"}
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.476300 5122 generic.go:358] "Generic (PLEG): container finished" podID="0870c79c-2d61-4f10-9269-7477e84e7b9d" containerID="e0199a110629f182840f0f5a78b7741521d3efb8bff294a49b768a78b820033c" exitCode=0
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.476867 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0870c79c-2d61-4f10-9269-7477e84e7b9d","Type":"ContainerDied","Data":"e0199a110629f182840f0f5a78b7741521d3efb8bff294a49b768a78b820033c"}
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.490889 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"4bfec94818b465dabc4b3de67e5dc6d8f880d3c3b59ca0a546daa96b822397d3"}
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.505268 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.505503 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:11:00.005474135 +0000 UTC m=+127.094928648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.505596 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k"
Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.506340 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:11:00.006331659 +0000 UTC m=+127.095786162 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.523646 5122 generic.go:358] "Generic (PLEG): container finished" podID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerID="27aafd5a2abb0483a73c9dc4f241f8bebfa263753e1b35543e146e7994a73775" exitCode=0 Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.523999 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjd62" event={"ID":"e9e21a16-c724-46be-8e7c-c8987db90f7b","Type":"ContainerDied","Data":"27aafd5a2abb0483a73c9dc4f241f8bebfa263753e1b35543e146e7994a73775"} Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.606553 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.606696 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:11:00.106655995 +0000 UTC m=+127.196110508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.606962 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.607912 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:11:00.10788205 +0000 UTC m=+127.197336563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.726280 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.726729 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:11:00.226699174 +0000 UTC m=+127.316153687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.727196 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.727589 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:11:00.227581528 +0000 UTC m=+127.317036031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-mkt9k" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.828355 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:10:59 crc kubenswrapper[5122]: E0224 00:10:59.828898 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-24 00:11:00.328867362 +0000 UTC m=+127.418321875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.865045 5122 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.902973 5122 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-24T00:10:59.865095876Z","UUID":"0ee3cfe4-a75c-4027-b9a0-233e0f7c7bbe","Handler":null,"Name":"","Endpoint":""} Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.906469 5122 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.906512 5122 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.932789 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 
00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.940694 5122 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 00:10:59 crc kubenswrapper[5122]: I0224 00:10:59.940739 5122 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.020245 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-mkt9k\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.039435 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.045109 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: 
"9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.235502 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.244568 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.539620 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gwpx2" event={"ID":"ae9b0319-d6e5-4434-9036-346a520931c8","Type":"ContainerStarted","Data":"9fb0ed4b137e0b6c9c9de86afbeeb24e423fff87e25b0b6bb3b27d2bd677b7e4"} Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.539674 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gwpx2" event={"ID":"ae9b0319-d6e5-4434-9036-346a520931c8","Type":"ContainerStarted","Data":"a4a846dfee8613b96d5be206af1c67a721d5bbf92bfe3ee180ab8e4620d8f4f2"} Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.542023 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"32aee17a7ec887daaf5241cd2ebb25d1f68e8293996dd6ca336ac5d356d7d7b7"} Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.546617 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" event={"ID":"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04","Type":"ContainerStarted","Data":"9dc703f69385f7493a4f3aa629669acb407830ad5a5132ab7ff422e4da05650d"} Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.546660 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" event={"ID":"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04","Type":"ContainerStarted","Data":"bfdbac6e0e5b3e95181bd6a1f49878585f047e668608e1e5e08cf889d5e23098"} Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.549397 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"eac021db4f9aa5c5fffd0de60ba7db5e8a63762c14ff3c157e85418e0526d35b"} Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.555621 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gwpx2" podStartSLOduration=104.555530821 podStartE2EDuration="1m44.555530821s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:00.55367369 +0000 UTC m=+127.643128223" watchObservedRunningTime="2026-02-24 00:11:00.555530821 +0000 UTC m=+127.644985344" Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.717338 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mkt9k"] Feb 24 00:11:00 crc kubenswrapper[5122]: I0224 00:11:00.967622 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.068342 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0870c79c-2d61-4f10-9269-7477e84e7b9d-kube-api-access\") pod \"0870c79c-2d61-4f10-9269-7477e84e7b9d\" (UID: \"0870c79c-2d61-4f10-9269-7477e84e7b9d\") " Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.068852 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0870c79c-2d61-4f10-9269-7477e84e7b9d-kubelet-dir\") pod \"0870c79c-2d61-4f10-9269-7477e84e7b9d\" (UID: \"0870c79c-2d61-4f10-9269-7477e84e7b9d\") " Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.069113 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0870c79c-2d61-4f10-9269-7477e84e7b9d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0870c79c-2d61-4f10-9269-7477e84e7b9d" (UID: "0870c79c-2d61-4f10-9269-7477e84e7b9d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.075595 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0870c79c-2d61-4f10-9269-7477e84e7b9d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0870c79c-2d61-4f10-9269-7477e84e7b9d" (UID: "0870c79c-2d61-4f10-9269-7477e84e7b9d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.123285 5122 patch_prober.go:28] interesting pod/downloads-747b44746d-m6v2b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.123544 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-m6v2b" podUID="a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.170209 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0870c79c-2d61-4f10-9269-7477e84e7b9d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.170248 5122 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0870c79c-2d61-4f10-9269-7477e84e7b9d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.564316 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.564319 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0870c79c-2d61-4f10-9269-7477e84e7b9d","Type":"ContainerDied","Data":"9ad50ba87f2108cbf78d44839e6f31857c4ddbf9c72f64c38e4e2d4ecbfd49e1"} Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.564381 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ad50ba87f2108cbf78d44839e6f31857c4ddbf9c72f64c38e4e2d4ecbfd49e1" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.569945 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" event={"ID":"c246391f-7d72-44c4-be1e-d9c37480d022","Type":"ContainerStarted","Data":"3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517"} Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.569978 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" event={"ID":"c246391f-7d72-44c4-be1e-d9c37480d022","Type":"ContainerStarted","Data":"22309b2113a441f18971da291719bfbb9791a627f05cd1e605315188de831ef1"} Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.570005 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.579137 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" event={"ID":"1d7b77dd-f3cb-474b-8db4-4a6f9af07a04","Type":"ContainerStarted","Data":"19a4c18fc23be9663c07e870518aa7cf104e6d8b761fe751db5c9f3ce7c043a0"} Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.579206 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.593954 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" podStartSLOduration=105.593911522 podStartE2EDuration="1m45.593911522s" podCreationTimestamp="2026-02-24 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:01.588008096 +0000 UTC m=+128.677462629" watchObservedRunningTime="2026-02-24 00:11:01.593911522 +0000 UTC m=+128.683366035" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.611537 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5jfb2" podStartSLOduration=19.611519244 podStartE2EDuration="19.611519244s" podCreationTimestamp="2026-02-24 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:01.60851646 +0000 UTC m=+128.697971003" watchObservedRunningTime="2026-02-24 00:11:01.611519244 +0000 UTC m=+128.700973757" Feb 24 00:11:01 crc kubenswrapper[5122]: I0224 00:11:01.790265 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Feb 24 00:11:02 crc kubenswrapper[5122]: I0224 00:11:02.403537 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:11:02 crc kubenswrapper[5122]: I0224 00:11:02.749688 5122 ???:1] "http: TLS handshake error from 192.168.126.11:59728: no serving certificate available for the kubelet" Feb 24 00:11:03 crc kubenswrapper[5122]: I0224 00:11:03.197711 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-q8fpc" Feb 24 00:11:04 crc kubenswrapper[5122]: I0224 00:11:04.893449 5122 patch_prober.go:28] interesting pod/downloads-747b44746d-m6v2b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 24 00:11:04 crc kubenswrapper[5122]: I0224 00:11:04.893748 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-m6v2b" podUID="a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 24 00:11:05 crc kubenswrapper[5122]: I0224 00:11:05.207943 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:11:05 crc kubenswrapper[5122]: I0224 00:11:05.212347 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-7fw77" Feb 24 00:11:05 crc kubenswrapper[5122]: I0224 00:11:05.350403 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:07 crc kubenswrapper[5122]: E0224 00:11:07.956123 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 00:11:07 crc kubenswrapper[5122]: E0224 00:11:07.959333 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 00:11:07 crc kubenswrapper[5122]: E0224 00:11:07.961632 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 00:11:07 crc kubenswrapper[5122]: E0224 00:11:07.961737 5122 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" podUID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 24 00:11:08 crc kubenswrapper[5122]: I0224 00:11:08.282947 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:11:11 crc kubenswrapper[5122]: I0224 00:11:11.121983 5122 patch_prober.go:28] interesting pod/downloads-747b44746d-m6v2b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 24 00:11:11 crc kubenswrapper[5122]: I0224 00:11:11.122182 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-m6v2b" podUID="a1d4f5ca-fa1f-4af4-acf0-23a11d82c0e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 24 00:11:13 crc kubenswrapper[5122]: I0224 00:11:13.021373 5122 ???:1] "http: TLS handshake error from 192.168.126.11:33596: no serving certificate available for the 
kubelet" Feb 24 00:11:17 crc kubenswrapper[5122]: E0224 00:11:17.954515 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 00:11:17 crc kubenswrapper[5122]: E0224 00:11:17.956734 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 00:11:17 crc kubenswrapper[5122]: E0224 00:11:17.958475 5122 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 00:11:17 crc kubenswrapper[5122]: E0224 00:11:17.958565 5122 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" podUID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 24 00:11:21 crc kubenswrapper[5122]: I0224 00:11:21.144630 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-m6v2b" Feb 24 00:11:21 crc kubenswrapper[5122]: I0224 00:11:21.712962 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-46xbn_b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63/kube-multus-additional-cni-plugins/0.log" Feb 24 00:11:21 crc kubenswrapper[5122]: I0224 00:11:21.713020 5122 generic.go:358] "Generic (PLEG): container finished" podID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" exitCode=137 Feb 24 00:11:21 crc kubenswrapper[5122]: I0224 00:11:21.713169 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" event={"ID":"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63","Type":"ContainerDied","Data":"0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb"} Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.368484 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-46xbn_b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63/kube-multus-additional-cni-plugins/0.log" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.368827 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.404860 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6ccnj" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.510123 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-ready\") pod \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.510184 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cd2g\" (UniqueName: \"kubernetes.io/projected/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-kube-api-access-8cd2g\") pod \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.510214 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist\") pod \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.510310 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-tuning-conf-dir\") pod \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\" (UID: \"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63\") " Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.510495 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod 
"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" (UID: "b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.510676 5122 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.510723 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-ready" (OuterVolumeSpecName: "ready") pod "b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" (UID: "b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.511035 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" (UID: "b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.519926 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-kube-api-access-8cd2g" (OuterVolumeSpecName: "kube-api-access-8cd2g") pod "b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" (UID: "b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63"). InnerVolumeSpecName "kube-api-access-8cd2g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.600268 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.612282 5122 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-ready\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.612306 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8cd2g\" (UniqueName: \"kubernetes.io/projected/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-kube-api-access-8cd2g\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.612317 5122 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.721473 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5844" event={"ID":"01c0c130-15b5-40ed-b1c9-2d4a979a5953","Type":"ContainerStarted","Data":"61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405"} Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.722661 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-46xbn_b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63/kube-multus-additional-cni-plugins/0.log" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.722807 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" event={"ID":"b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63","Type":"ContainerDied","Data":"d25400b7c36110aa3197a9b5d33a6fd737f173463274c994bcf8b382ae54706b"} Feb 24 00:11:22 crc 
kubenswrapper[5122]: I0224 00:11:22.722852 5122 scope.go:117] "RemoveContainer" containerID="0c4198c3e9dfc7b6b508403967403c0325724c11cd467c4435d0d8e4583c07bb" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.722955 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-46xbn" Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.725555 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmsz" event={"ID":"780f6ddc-69b1-4e7e-ac47-c5dccdde6537","Type":"ContainerStarted","Data":"7915b4fe8b18be5e9b8d72d7f7b4431b7514a76803a29c976013a92af82e6984"} Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.742375 5122 generic.go:358] "Generic (PLEG): container finished" podID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerID="4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4" exitCode=0 Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.742484 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpk76" event={"ID":"78a838b3-595e-4b72-b482-93f22e3cd1a0","Type":"ContainerDied","Data":"4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4"} Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.755901 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjd62" event={"ID":"e9e21a16-c724-46be-8e7c-c8987db90f7b","Type":"ContainerStarted","Data":"26ca68a4375c712801d8269e9beb936af1304fc84db0c92cf6f5c9706dcc55ab"} Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.765339 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn26" event={"ID":"b49afeaf-b456-453e-899d-8fccce0a72b9","Type":"ContainerStarted","Data":"3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef"} Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.768206 5122 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2x2" event={"ID":"b28573a3-2eb9-41dd-8eb5-7a4f9b677028","Type":"ContainerStarted","Data":"c4861903e11616effb8456ac4de0d38ac536642b3103c17be78642ef830ddd29"} Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.770813 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zx7" event={"ID":"8c82e1f6-e130-4361-a3a2-13613f953cbb","Type":"ContainerStarted","Data":"76d8d1140e59ff2ec26ded58fbb5fcbff2c2d0c8be2d7d03ad5b9436203bb4ce"} Feb 24 00:11:22 crc kubenswrapper[5122]: I0224 00:11:22.773461 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhfb" event={"ID":"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8","Type":"ContainerStarted","Data":"a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.109151 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-46xbn"] Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.115517 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-46xbn"] Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.780233 5122 generic.go:358] "Generic (PLEG): container finished" podID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerID="76d8d1140e59ff2ec26ded58fbb5fcbff2c2d0c8be2d7d03ad5b9436203bb4ce" exitCode=0 Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.782175 5122 generic.go:358] "Generic (PLEG): container finished" podID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerID="a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e" exitCode=0 Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.783715 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" path="/var/lib/kubelet/pods/b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63/volumes" Feb 24 00:11:23 crc 
kubenswrapper[5122]: I0224 00:11:23.784134 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zx7" event={"ID":"8c82e1f6-e130-4361-a3a2-13613f953cbb","Type":"ContainerDied","Data":"76d8d1140e59ff2ec26ded58fbb5fcbff2c2d0c8be2d7d03ad5b9436203bb4ce"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.784165 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zx7" event={"ID":"8c82e1f6-e130-4361-a3a2-13613f953cbb","Type":"ContainerStarted","Data":"f0164600e88b951324f595a79cdef52ee93afe86659f00421e8f8dd33b47f209"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.784176 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhfb" event={"ID":"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8","Type":"ContainerDied","Data":"a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.784318 5122 generic.go:358] "Generic (PLEG): container finished" podID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerID="61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405" exitCode=0 Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.784406 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5844" event={"ID":"01c0c130-15b5-40ed-b1c9-2d4a979a5953","Type":"ContainerDied","Data":"61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.788550 5122 generic.go:358] "Generic (PLEG): container finished" podID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerID="7915b4fe8b18be5e9b8d72d7f7b4431b7514a76803a29c976013a92af82e6984" exitCode=0 Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.788643 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmsz" 
event={"ID":"780f6ddc-69b1-4e7e-ac47-c5dccdde6537","Type":"ContainerDied","Data":"7915b4fe8b18be5e9b8d72d7f7b4431b7514a76803a29c976013a92af82e6984"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.790925 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpk76" event={"ID":"78a838b3-595e-4b72-b482-93f22e3cd1a0","Type":"ContainerStarted","Data":"aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.792285 5122 generic.go:358] "Generic (PLEG): container finished" podID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerID="26ca68a4375c712801d8269e9beb936af1304fc84db0c92cf6f5c9706dcc55ab" exitCode=0 Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.792352 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjd62" event={"ID":"e9e21a16-c724-46be-8e7c-c8987db90f7b","Type":"ContainerDied","Data":"26ca68a4375c712801d8269e9beb936af1304fc84db0c92cf6f5c9706dcc55ab"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.797521 5122 generic.go:358] "Generic (PLEG): container finished" podID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerID="3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef" exitCode=0 Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.798297 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn26" event={"ID":"b49afeaf-b456-453e-899d-8fccce0a72b9","Type":"ContainerDied","Data":"3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.800998 5122 generic.go:358] "Generic (PLEG): container finished" podID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerID="c4861903e11616effb8456ac4de0d38ac536642b3103c17be78642ef830ddd29" exitCode=0 Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.801041 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-nq2x2" event={"ID":"b28573a3-2eb9-41dd-8eb5-7a4f9b677028","Type":"ContainerDied","Data":"c4861903e11616effb8456ac4de0d38ac536642b3103c17be78642ef830ddd29"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.801079 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2x2" event={"ID":"b28573a3-2eb9-41dd-8eb5-7a4f9b677028","Type":"ContainerStarted","Data":"04ecd8b705cc24867fefe94d285b3a9b9e26991c34584a65e4ed98558683fd0c"} Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.803650 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s6zx7" podStartSLOduration=3.8147533620000003 podStartE2EDuration="28.803631607s" podCreationTimestamp="2026-02-24 00:10:55 +0000 UTC" firstStartedPulling="2026-02-24 00:10:57.386368421 +0000 UTC m=+124.475822934" lastFinishedPulling="2026-02-24 00:11:22.375246666 +0000 UTC m=+149.464701179" observedRunningTime="2026-02-24 00:11:23.799480551 +0000 UTC m=+150.888935084" watchObservedRunningTime="2026-02-24 00:11:23.803631607 +0000 UTC m=+150.893086120" Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.939299 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nq2x2" podStartSLOduration=3.961998559 podStartE2EDuration="30.939281642s" podCreationTimestamp="2026-02-24 00:10:53 +0000 UTC" firstStartedPulling="2026-02-24 00:10:55.346573436 +0000 UTC m=+122.436027949" lastFinishedPulling="2026-02-24 00:11:22.323856519 +0000 UTC m=+149.413311032" observedRunningTime="2026-02-24 00:11:23.935952449 +0000 UTC m=+151.025406962" watchObservedRunningTime="2026-02-24 00:11:23.939281642 +0000 UTC m=+151.028736155" Feb 24 00:11:23 crc kubenswrapper[5122]: I0224 00:11:23.950862 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cpk76" 
podStartSLOduration=3.996652161 podStartE2EDuration="28.950848386s" podCreationTimestamp="2026-02-24 00:10:55 +0000 UTC" firstStartedPulling="2026-02-24 00:10:57.38418159 +0000 UTC m=+124.473636103" lastFinishedPulling="2026-02-24 00:11:22.338377815 +0000 UTC m=+149.427832328" observedRunningTime="2026-02-24 00:11:23.950477376 +0000 UTC m=+151.039931899" watchObservedRunningTime="2026-02-24 00:11:23.950848386 +0000 UTC m=+151.040302899" Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.123691 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.123750 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-nq2x2" Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.818026 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmsz" event={"ID":"780f6ddc-69b1-4e7e-ac47-c5dccdde6537","Type":"ContainerStarted","Data":"a7ea428518339d603dd70aefdd9a4afca194821a3698ffe50846ef707e0b0e7d"} Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.820389 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjd62" event={"ID":"e9e21a16-c724-46be-8e7c-c8987db90f7b","Type":"ContainerStarted","Data":"fd2aaf78bf590887747bd5731b3fbcaf63f311e83487389cad08069cf91d97da"} Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.822621 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn26" event={"ID":"b49afeaf-b456-453e-899d-8fccce0a72b9","Type":"ContainerStarted","Data":"256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162"} Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.824658 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhfb" 
event={"ID":"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8","Type":"ContainerStarted","Data":"3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216"} Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.828297 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5844" event={"ID":"01c0c130-15b5-40ed-b1c9-2d4a979a5953","Type":"ContainerStarted","Data":"1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e"} Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.842285 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpmsz" podStartSLOduration=4.773695537 podStartE2EDuration="31.842268434s" podCreationTimestamp="2026-02-24 00:10:53 +0000 UTC" firstStartedPulling="2026-02-24 00:10:55.308417628 +0000 UTC m=+122.397872141" lastFinishedPulling="2026-02-24 00:11:22.376990525 +0000 UTC m=+149.466445038" observedRunningTime="2026-02-24 00:11:24.83926758 +0000 UTC m=+151.928722083" watchObservedRunningTime="2026-02-24 00:11:24.842268434 +0000 UTC m=+151.931722947" Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.856025 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lvn26" podStartSLOduration=4.848226652 podStartE2EDuration="31.856008648s" podCreationTimestamp="2026-02-24 00:10:53 +0000 UTC" firstStartedPulling="2026-02-24 00:10:55.330516647 +0000 UTC m=+122.419971160" lastFinishedPulling="2026-02-24 00:11:22.338298643 +0000 UTC m=+149.427753156" observedRunningTime="2026-02-24 00:11:24.853443816 +0000 UTC m=+151.942898339" watchObservedRunningTime="2026-02-24 00:11:24.856008648 +0000 UTC m=+151.945463161" Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.875899 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d5844" podStartSLOduration=4.8398231670000005 
podStartE2EDuration="31.875881684s" podCreationTimestamp="2026-02-24 00:10:53 +0000 UTC" firstStartedPulling="2026-02-24 00:10:55.287718559 +0000 UTC m=+122.377173072" lastFinishedPulling="2026-02-24 00:11:22.323777056 +0000 UTC m=+149.413231589" observedRunningTime="2026-02-24 00:11:24.872745106 +0000 UTC m=+151.962199629" watchObservedRunningTime="2026-02-24 00:11:24.875881684 +0000 UTC m=+151.965336217" Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.887883 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pjd62" podStartSLOduration=6.014355603 podStartE2EDuration="28.887868439s" podCreationTimestamp="2026-02-24 00:10:56 +0000 UTC" firstStartedPulling="2026-02-24 00:10:59.524779305 +0000 UTC m=+126.614233818" lastFinishedPulling="2026-02-24 00:11:22.398292141 +0000 UTC m=+149.487746654" observedRunningTime="2026-02-24 00:11:24.887202461 +0000 UTC m=+151.976656984" watchObservedRunningTime="2026-02-24 00:11:24.887868439 +0000 UTC m=+151.977322962" Feb 24 00:11:24 crc kubenswrapper[5122]: I0224 00:11:24.907238 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4rhfb" podStartSLOduration=4.927953501 podStartE2EDuration="28.907223861s" podCreationTimestamp="2026-02-24 00:10:56 +0000 UTC" firstStartedPulling="2026-02-24 00:10:58.397756406 +0000 UTC m=+125.487210909" lastFinishedPulling="2026-02-24 00:11:22.377026756 +0000 UTC m=+149.466481269" observedRunningTime="2026-02-24 00:11:24.904601267 +0000 UTC m=+151.994055780" watchObservedRunningTime="2026-02-24 00:11:24.907223861 +0000 UTC m=+151.996678374" Feb 24 00:11:25 crc kubenswrapper[5122]: I0224 00:11:25.324510 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nq2x2" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="registry-server" probeResult="failure" output=< Feb 24 00:11:25 crc kubenswrapper[5122]: timeout: 
failed to connect service ":50051" within 1s Feb 24 00:11:25 crc kubenswrapper[5122]: > Feb 24 00:11:25 crc kubenswrapper[5122]: I0224 00:11:25.474161 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:11:25 crc kubenswrapper[5122]: I0224 00:11:25.474299 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:11:25 crc kubenswrapper[5122]: I0224 00:11:25.523569 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:11:25 crc kubenswrapper[5122]: I0224 00:11:25.879565 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-s6zx7" Feb 24 00:11:25 crc kubenswrapper[5122]: I0224 00:11:25.881011 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s6zx7" Feb 24 00:11:25 crc kubenswrapper[5122]: I0224 00:11:25.923646 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s6zx7" Feb 24 00:11:26 crc kubenswrapper[5122]: I0224 00:11:26.763833 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4rhfb" Feb 24 00:11:26 crc kubenswrapper[5122]: I0224 00:11:26.764118 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-4rhfb" Feb 24 00:11:27 crc kubenswrapper[5122]: I0224 00:11:27.343861 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:11:27 crc kubenswrapper[5122]: I0224 00:11:27.344178 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-pjd62" Feb 24 00:11:27 crc kubenswrapper[5122]: I0224 00:11:27.876629 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4rhfb" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="registry-server" probeResult="failure" output=< Feb 24 00:11:27 crc kubenswrapper[5122]: timeout: failed to connect service ":50051" within 1s Feb 24 00:11:27 crc kubenswrapper[5122]: > Feb 24 00:11:28 crc kubenswrapper[5122]: I0224 00:11:28.383842 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pjd62" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="registry-server" probeResult="failure" output=< Feb 24 00:11:28 crc kubenswrapper[5122]: timeout: failed to connect service ":50051" within 1s Feb 24 00:11:28 crc kubenswrapper[5122]: > Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.315222 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.316549 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" containerName="kube-multus-additional-cni-plugins" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.316564 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" containerName="kube-multus-additional-cni-plugins" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.316579 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0870c79c-2d61-4f10-9269-7477e84e7b9d" containerName="pruner" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.316584 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="0870c79c-2d61-4f10-9269-7477e84e7b9d" containerName="pruner" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.316695 5122 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="0870c79c-2d61-4f10-9269-7477e84e7b9d" containerName="pruner" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.316709 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b94322c2-8f9b-4719-bbe1-e4fb8a1b9d63" containerName="kube-multus-additional-cni-plugins" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.321027 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.322741 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.322972 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.331120 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.444023 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07656290-2175-4bf8-a5f3-cb4ffade894f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"07656290-2175-4bf8-a5f3-cb4ffade894f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.444133 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07656290-2175-4bf8-a5f3-cb4ffade894f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"07656290-2175-4bf8-a5f3-cb4ffade894f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.546131 5122 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07656290-2175-4bf8-a5f3-cb4ffade894f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"07656290-2175-4bf8-a5f3-cb4ffade894f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.546267 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07656290-2175-4bf8-a5f3-cb4ffade894f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"07656290-2175-4bf8-a5f3-cb4ffade894f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.546290 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07656290-2175-4bf8-a5f3-cb4ffade894f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"07656290-2175-4bf8-a5f3-cb4ffade894f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.567058 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07656290-2175-4bf8-a5f3-cb4ffade894f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"07656290-2175-4bf8-a5f3-cb4ffade894f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.599406 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.647543 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.837736 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 24 00:11:32 crc kubenswrapper[5122]: W0224 00:11:32.845013 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod07656290_2175_4bf8_a5f3_cb4ffade894f.slice/crio-1566dbc49eadad0eff509686bea6d5cb0c2dd87fb7feb84529aee63f879bc908 WatchSource:0}: Error finding container 1566dbc49eadad0eff509686bea6d5cb0c2dd87fb7feb84529aee63f879bc908: Status 404 returned error can't find the container with id 1566dbc49eadad0eff509686bea6d5cb0c2dd87fb7feb84529aee63f879bc908 Feb 24 00:11:32 crc kubenswrapper[5122]: I0224 00:11:32.869887 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"07656290-2175-4bf8-a5f3-cb4ffade894f","Type":"ContainerStarted","Data":"1566dbc49eadad0eff509686bea6d5cb0c2dd87fb7feb84529aee63f879bc908"} Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.468441 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.468891 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.513167 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.522330 5122 ???:1] "http: TLS handshake error from 192.168.126.11:48086: no serving certificate available for the kubelet" Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.681655 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/certified-operators-d5844"
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.681709 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d5844"
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.712848 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d5844"
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.874230 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"07656290-2175-4bf8-a5f3-cb4ffade894f","Type":"ContainerStarted","Data":"fb4d7dddea6c2c1212fcdbbec49daee834a925287ce33bc8142229b94156e4ee"}
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.885149 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-bpmsz"
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.885979 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpmsz"
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.888873 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=1.8888521520000001 podStartE2EDuration="1.888852152s" podCreationTimestamp="2026-02-24 00:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:33.886708542 +0000 UTC m=+160.976163075" watchObservedRunningTime="2026-02-24 00:11:33.888852152 +0000 UTC m=+160.978306675"
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.912134 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lvn26"
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.920708 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpmsz"
Feb 24 00:11:33 crc kubenswrapper[5122]: I0224 00:11:33.926663 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d5844"
Feb 24 00:11:34 crc kubenswrapper[5122]: I0224 00:11:34.170239 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nq2x2"
Feb 24 00:11:34 crc kubenswrapper[5122]: I0224 00:11:34.226015 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nq2x2"
Feb 24 00:11:34 crc kubenswrapper[5122]: I0224 00:11:34.886326 5122 generic.go:358] "Generic (PLEG): container finished" podID="07656290-2175-4bf8-a5f3-cb4ffade894f" containerID="fb4d7dddea6c2c1212fcdbbec49daee834a925287ce33bc8142229b94156e4ee" exitCode=0
Feb 24 00:11:34 crc kubenswrapper[5122]: I0224 00:11:34.886414 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"07656290-2175-4bf8-a5f3-cb4ffade894f","Type":"ContainerDied","Data":"fb4d7dddea6c2c1212fcdbbec49daee834a925287ce33bc8142229b94156e4ee"}
Feb 24 00:11:34 crc kubenswrapper[5122]: I0224 00:11:34.948459 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpmsz"
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.138755 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.294551 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07656290-2175-4bf8-a5f3-cb4ffade894f-kubelet-dir\") pod \"07656290-2175-4bf8-a5f3-cb4ffade894f\" (UID: \"07656290-2175-4bf8-a5f3-cb4ffade894f\") "
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.294634 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07656290-2175-4bf8-a5f3-cb4ffade894f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "07656290-2175-4bf8-a5f3-cb4ffade894f" (UID: "07656290-2175-4bf8-a5f3-cb4ffade894f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.294683 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07656290-2175-4bf8-a5f3-cb4ffade894f-kube-api-access\") pod \"07656290-2175-4bf8-a5f3-cb4ffade894f\" (UID: \"07656290-2175-4bf8-a5f3-cb4ffade894f\") "
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.294902 5122 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07656290-2175-4bf8-a5f3-cb4ffade894f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.301351 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07656290-2175-4bf8-a5f3-cb4ffade894f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "07656290-2175-4bf8-a5f3-cb4ffade894f" (UID: "07656290-2175-4bf8-a5f3-cb4ffade894f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.336067 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nq2x2"]
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.336419 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nq2x2" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="registry-server" containerID="cri-o://04ecd8b705cc24867fefe94d285b3a9b9e26991c34584a65e4ed98558683fd0c" gracePeriod=2
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.396368 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07656290-2175-4bf8-a5f3-cb4ffade894f-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.805387 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.864713 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.888909 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cpk76"
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.899050 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.899097 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"07656290-2175-4bf8-a5f3-cb4ffade894f","Type":"ContainerDied","Data":"1566dbc49eadad0eff509686bea6d5cb0c2dd87fb7feb84529aee63f879bc908"}
Feb 24 00:11:36 crc kubenswrapper[5122]: I0224 00:11:36.899160 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1566dbc49eadad0eff509686bea6d5cb0c2dd87fb7feb84529aee63f879bc908"
Feb 24 00:11:37 crc kubenswrapper[5122]: I0224 00:11:37.336185 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpmsz"]
Feb 24 00:11:37 crc kubenswrapper[5122]: I0224 00:11:37.386341 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pjd62"
Feb 24 00:11:37 crc kubenswrapper[5122]: I0224 00:11:37.439547 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pjd62"
Feb 24 00:11:37 crc kubenswrapper[5122]: I0224 00:11:37.888768 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:11:37 crc kubenswrapper[5122]: I0224 00:11:37.916412 5122 generic.go:358] "Generic (PLEG): container finished" podID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerID="04ecd8b705cc24867fefe94d285b3a9b9e26991c34584a65e4ed98558683fd0c" exitCode=0
Feb 24 00:11:37 crc kubenswrapper[5122]: I0224 00:11:37.916486 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2x2" event={"ID":"b28573a3-2eb9-41dd-8eb5-7a4f9b677028","Type":"ContainerDied","Data":"04ecd8b705cc24867fefe94d285b3a9b9e26991c34584a65e4ed98558683fd0c"}
Feb 24 00:11:37 crc kubenswrapper[5122]: I0224 00:11:37.916706 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpmsz" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerName="registry-server" containerID="cri-o://a7ea428518339d603dd70aefdd9a4afca194821a3698ffe50846ef707e0b0e7d" gracePeriod=2
Feb 24 00:11:37 crc kubenswrapper[5122]: I0224 00:11:37.965226 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq2x2"
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.116538 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-utilities\") pod \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") "
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.116648 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84zj2\" (UniqueName: \"kubernetes.io/projected/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-kube-api-access-84zj2\") pod \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") "
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.116726 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-catalog-content\") pod \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\" (UID: \"b28573a3-2eb9-41dd-8eb5-7a4f9b677028\") "
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.117481 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-utilities" (OuterVolumeSpecName: "utilities") pod "b28573a3-2eb9-41dd-8eb5-7a4f9b677028" (UID: "b28573a3-2eb9-41dd-8eb5-7a4f9b677028"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.126561 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-kube-api-access-84zj2" (OuterVolumeSpecName: "kube-api-access-84zj2") pod "b28573a3-2eb9-41dd-8eb5-7a4f9b677028" (UID: "b28573a3-2eb9-41dd-8eb5-7a4f9b677028"). InnerVolumeSpecName "kube-api-access-84zj2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.158238 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b28573a3-2eb9-41dd-8eb5-7a4f9b677028" (UID: "b28573a3-2eb9-41dd-8eb5-7a4f9b677028"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.218376 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.218422 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.218435 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-84zj2\" (UniqueName: \"kubernetes.io/projected/b28573a3-2eb9-41dd-8eb5-7a4f9b677028-kube-api-access-84zj2\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.923366 5122 generic.go:358] "Generic (PLEG): container finished" podID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerID="a7ea428518339d603dd70aefdd9a4afca194821a3698ffe50846ef707e0b0e7d" exitCode=0
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.923402 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmsz" event={"ID":"780f6ddc-69b1-4e7e-ac47-c5dccdde6537","Type":"ContainerDied","Data":"a7ea428518339d603dd70aefdd9a4afca194821a3698ffe50846ef707e0b0e7d"}
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.925773 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2x2" event={"ID":"b28573a3-2eb9-41dd-8eb5-7a4f9b677028","Type":"ContainerDied","Data":"60cd59977e44e7ad534cc221ed683fd8ec4e62d8c7e46359b4d5405291776990"}
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.925887 5122 scope.go:117] "RemoveContainer" containerID="04ecd8b705cc24867fefe94d285b3a9b9e26991c34584a65e4ed98558683fd0c"
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.925799 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq2x2"
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.942977 5122 scope.go:117] "RemoveContainer" containerID="c4861903e11616effb8456ac4de0d38ac536642b3103c17be78642ef830ddd29"
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.954644 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nq2x2"]
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.958686 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nq2x2"]
Feb 24 00:11:38 crc kubenswrapper[5122]: I0224 00:11:38.958750 5122 scope.go:117] "RemoveContainer" containerID="9a87e704005b97403dd44dfdb6f5d26fe37d7539b7183b390d44e3d76a60b4ec"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.533096 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.533893 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="registry-server"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.533913 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="registry-server"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.533941 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="extract-content"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.533949 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="extract-content"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.533962 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07656290-2175-4bf8-a5f3-cb4ffade894f" containerName="pruner"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.533970 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="07656290-2175-4bf8-a5f3-cb4ffade894f" containerName="pruner"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.534003 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="extract-utilities"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.534012 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="extract-utilities"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.534151 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" containerName="registry-server"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.534173 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="07656290-2175-4bf8-a5f3-cb4ffade894f" containerName="pruner"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.537520 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.539713 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.540293 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.540359 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.629831 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpmsz"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.634886 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-var-lock\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.634939 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f590e646-1918-49c0-a565-b475676fa33c-kube-api-access\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.635034 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.736192 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-utilities\") pod \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") "
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.736287 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-catalog-content\") pod \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") "
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.736351 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc8vs\" (UniqueName: \"kubernetes.io/projected/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-kube-api-access-sc8vs\") pod \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\" (UID: \"780f6ddc-69b1-4e7e-ac47-c5dccdde6537\") "
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.736485 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-var-lock\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.736512 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f590e646-1918-49c0-a565-b475676fa33c-kube-api-access\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.736582 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.736648 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.736959 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjd62"]
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.737254 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pjd62" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="registry-server" containerID="cri-o://fd2aaf78bf590887747bd5731b3fbcaf63f311e83487389cad08069cf91d97da" gracePeriod=2
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.737520 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-utilities" (OuterVolumeSpecName: "utilities") pod "780f6ddc-69b1-4e7e-ac47-c5dccdde6537" (UID: "780f6ddc-69b1-4e7e-ac47-c5dccdde6537"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.737854 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-var-lock\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.742670 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-kube-api-access-sc8vs" (OuterVolumeSpecName: "kube-api-access-sc8vs") pod "780f6ddc-69b1-4e7e-ac47-c5dccdde6537" (UID: "780f6ddc-69b1-4e7e-ac47-c5dccdde6537"). InnerVolumeSpecName "kube-api-access-sc8vs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.757410 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f590e646-1918-49c0-a565-b475676fa33c-kube-api-access\") pod \"installer-12-crc\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.783836 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28573a3-2eb9-41dd-8eb5-7a4f9b677028" path="/var/lib/kubelet/pods/b28573a3-2eb9-41dd-8eb5-7a4f9b677028/volumes"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.784948 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "780f6ddc-69b1-4e7e-ac47-c5dccdde6537" (UID: "780f6ddc-69b1-4e7e-ac47-c5dccdde6537"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.838259 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sc8vs\" (UniqueName: \"kubernetes.io/projected/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-kube-api-access-sc8vs\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.838295 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.838304 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780f6ddc-69b1-4e7e-ac47-c5dccdde6537-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.859969 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.942990 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpmsz"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.943009 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmsz" event={"ID":"780f6ddc-69b1-4e7e-ac47-c5dccdde6537","Type":"ContainerDied","Data":"1d4c041e45aaef05c715936d7cb8d604b460dea7a3e54a3def867336db5032f6"}
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.943066 5122 scope.go:117] "RemoveContainer" containerID="a7ea428518339d603dd70aefdd9a4afca194821a3698ffe50846ef707e0b0e7d"
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.972644 5122 generic.go:358] "Generic (PLEG): container finished" podID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerID="fd2aaf78bf590887747bd5731b3fbcaf63f311e83487389cad08069cf91d97da" exitCode=0
Feb 24 00:11:39 crc kubenswrapper[5122]: I0224 00:11:39.972707 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjd62" event={"ID":"e9e21a16-c724-46be-8e7c-c8987db90f7b","Type":"ContainerDied","Data":"fd2aaf78bf590887747bd5731b3fbcaf63f311e83487389cad08069cf91d97da"}
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.082452 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpmsz"]
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.090634 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpmsz"]
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.096254 5122 scope.go:117] "RemoveContainer" containerID="7915b4fe8b18be5e9b8d72d7f7b4431b7514a76803a29c976013a92af82e6984"
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.134977 5122 scope.go:117] "RemoveContainer" containerID="5ef11ad801daaf2a03ba4da307a2c5594e8b70c8262e4c29b2b730e1cfec63e9"
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.219248 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.226181 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjd62"
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.343010 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-catalog-content\") pod \"e9e21a16-c724-46be-8e7c-c8987db90f7b\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") "
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.343346 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrptj\" (UniqueName: \"kubernetes.io/projected/e9e21a16-c724-46be-8e7c-c8987db90f7b-kube-api-access-nrptj\") pod \"e9e21a16-c724-46be-8e7c-c8987db90f7b\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") "
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.343380 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-utilities\") pod \"e9e21a16-c724-46be-8e7c-c8987db90f7b\" (UID: \"e9e21a16-c724-46be-8e7c-c8987db90f7b\") "
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.344837 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-utilities" (OuterVolumeSpecName: "utilities") pod "e9e21a16-c724-46be-8e7c-c8987db90f7b" (UID: "e9e21a16-c724-46be-8e7c-c8987db90f7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.358274 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e21a16-c724-46be-8e7c-c8987db90f7b-kube-api-access-nrptj" (OuterVolumeSpecName: "kube-api-access-nrptj") pod "e9e21a16-c724-46be-8e7c-c8987db90f7b" (UID: "e9e21a16-c724-46be-8e7c-c8987db90f7b"). InnerVolumeSpecName "kube-api-access-nrptj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.445136 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrptj\" (UniqueName: \"kubernetes.io/projected/e9e21a16-c724-46be-8e7c-c8987db90f7b-kube-api-access-nrptj\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.445180 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.463377 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9e21a16-c724-46be-8e7c-c8987db90f7b" (UID: "e9e21a16-c724-46be-8e7c-c8987db90f7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.546061 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e21a16-c724-46be-8e7c-c8987db90f7b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.736491 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zx7"]
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.737139 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s6zx7" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerName="registry-server" containerID="cri-o://f0164600e88b951324f595a79cdef52ee93afe86659f00421e8f8dd33b47f209" gracePeriod=2
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.982892 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f590e646-1918-49c0-a565-b475676fa33c","Type":"ContainerStarted","Data":"f2e2d914f3830ad954b2bebfd9f358264274d7bcdf050ef6169c908f056df9eb"}
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.982936 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f590e646-1918-49c0-a565-b475676fa33c","Type":"ContainerStarted","Data":"ca2c0ea0c19863e6961cdd00aa67c7a08ad5c60b336cfffa08128ec29876e10a"}
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.993781 5122 generic.go:358] "Generic (PLEG): container finished" podID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerID="f0164600e88b951324f595a79cdef52ee93afe86659f00421e8f8dd33b47f209" exitCode=0
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.993906 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zx7" event={"ID":"8c82e1f6-e130-4361-a3a2-13613f953cbb","Type":"ContainerDied","Data":"f0164600e88b951324f595a79cdef52ee93afe86659f00421e8f8dd33b47f209"}
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.996950 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=1.996938758 podStartE2EDuration="1.996938758s" podCreationTimestamp="2026-02-24 00:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:40.99593313 +0000 UTC m=+168.085387653" watchObservedRunningTime="2026-02-24 00:11:40.996938758 +0000 UTC m=+168.086393271"
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.997354 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjd62" event={"ID":"e9e21a16-c724-46be-8e7c-c8987db90f7b","Type":"ContainerDied","Data":"18ed4484a29636e64fe788527a21f069608863e74fa0af5bdec8a62417f40867"}
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.997389 5122 scope.go:117] "RemoveContainer" containerID="fd2aaf78bf590887747bd5731b3fbcaf63f311e83487389cad08069cf91d97da"
Feb 24 00:11:40 crc kubenswrapper[5122]: I0224 00:11:40.997492 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjd62"
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.018618 5122 scope.go:117] "RemoveContainer" containerID="26ca68a4375c712801d8269e9beb936af1304fc84db0c92cf6f5c9706dcc55ab"
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.072260 5122 scope.go:117] "RemoveContainer" containerID="27aafd5a2abb0483a73c9dc4f241f8bebfa263753e1b35543e146e7994a73775"
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.083646 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.099002 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjd62"]
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.103032 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pjd62"]
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.253913 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-utilities\") pod \"8c82e1f6-e130-4361-a3a2-13613f953cbb\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") "
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.254122 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c9bb\" (UniqueName: \"kubernetes.io/projected/8c82e1f6-e130-4361-a3a2-13613f953cbb-kube-api-access-9c9bb\") pod \"8c82e1f6-e130-4361-a3a2-13613f953cbb\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") "
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.254192 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-catalog-content\") pod \"8c82e1f6-e130-4361-a3a2-13613f953cbb\" (UID: \"8c82e1f6-e130-4361-a3a2-13613f953cbb\") "
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.255256 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-utilities" (OuterVolumeSpecName: "utilities") pod "8c82e1f6-e130-4361-a3a2-13613f953cbb" (UID: "8c82e1f6-e130-4361-a3a2-13613f953cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.259972 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c82e1f6-e130-4361-a3a2-13613f953cbb-kube-api-access-9c9bb" (OuterVolumeSpecName: "kube-api-access-9c9bb") pod "8c82e1f6-e130-4361-a3a2-13613f953cbb" (UID: "8c82e1f6-e130-4361-a3a2-13613f953cbb"). InnerVolumeSpecName "kube-api-access-9c9bb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.270129 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c82e1f6-e130-4361-a3a2-13613f953cbb" (UID: "8c82e1f6-e130-4361-a3a2-13613f953cbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.355963 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.356005 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c82e1f6-e130-4361-a3a2-13613f953cbb-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.356021 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9c9bb\" (UniqueName: \"kubernetes.io/projected/8c82e1f6-e130-4361-a3a2-13613f953cbb-kube-api-access-9c9bb\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.783709 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" path="/var/lib/kubelet/pods/780f6ddc-69b1-4e7e-ac47-c5dccdde6537/volumes"
Feb 24 00:11:41 crc kubenswrapper[5122]: I0224 00:11:41.784604 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" path="/var/lib/kubelet/pods/e9e21a16-c724-46be-8e7c-c8987db90f7b/volumes"
Feb 24 00:11:42 crc kubenswrapper[5122]: I0224 00:11:42.005059 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6zx7"
Feb 24 00:11:42 crc kubenswrapper[5122]: I0224 00:11:42.005087 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zx7" event={"ID":"8c82e1f6-e130-4361-a3a2-13613f953cbb","Type":"ContainerDied","Data":"01abcee582b4017f627f1892ecd7eb02143367a64dbf3d85a3139b38d715a006"}
Feb 24 00:11:42 crc kubenswrapper[5122]: I0224 00:11:42.006490 5122 scope.go:117] "RemoveContainer" containerID="f0164600e88b951324f595a79cdef52ee93afe86659f00421e8f8dd33b47f209"
Feb 24 00:11:42 crc kubenswrapper[5122]: I0224 00:11:42.020123 5122 scope.go:117] "RemoveContainer" containerID="76d8d1140e59ff2ec26ded58fbb5fcbff2c2d0c8be2d7d03ad5b9436203bb4ce"
Feb 24 00:11:42 crc kubenswrapper[5122]: I0224 00:11:42.026698 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zx7"]
Feb 24 00:11:42 crc kubenswrapper[5122]: I0224 00:11:42.031433 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zx7"]
Feb 24 00:11:42 crc kubenswrapper[5122]: I0224 00:11:42.042339 5122 scope.go:117] "RemoveContainer" containerID="b468ab622d33f6b7278d88cfaf23aff3b0b625d5c3be038465667e7ec2bb0de2"
Feb 24 00:11:43 crc kubenswrapper[5122]: I0224 00:11:43.782409 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" path="/var/lib/kubelet/pods/8c82e1f6-e130-4361-a3a2-13613f953cbb/volumes"
Feb 24 00:12:14 crc
kubenswrapper[5122]: I0224 00:12:14.505226 5122 ???:1] "http: TLS handshake error from 192.168.126.11:38196: no serving certificate available for the kubelet" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.424324 5122 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.425460 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4f05afbe3b3aa2acbf7bb698b08b183431cb39128be15abf2ee678640de1a2f9" gracePeriod=15 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.425459 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://cfbf4f7e6544aaa90a5b7583d6b85e287ed0d459941edf55d5ac1fda8a1c905a" gracePeriod=15 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.425631 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7e62718d5fa1a2c8d163a016ae2607ec93029e94464ebf0518d890c39534e4b0" gracePeriod=15 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.425722 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a68c1527a3daaf2edd8a58adc3928d53f63266e661d665d090ae7d0850e50d2e" gracePeriod=15 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.425789 5122 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cd66379a5e0fec18bb00729a9f9015cac040f0c1bc1927f73a7a5603f8d6fe10" gracePeriod=15 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.427324 5122 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.427946 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.427961 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.427974 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerName="extract-utilities" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.427984 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerName="extract-utilities" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.427992 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.427999 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428009 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerName="extract-content" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428016 5122 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerName="extract-content" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428028 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428035 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428046 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428054 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428062 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428097 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428107 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428114 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428126 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="extract-utilities" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428133 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="extract-utilities" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428143 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerName="extract-content" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428151 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerName="extract-content" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428160 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428166 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428178 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="extract-content" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428185 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="extract-content" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428195 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428202 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 
00:12:18.428214 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428221 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428236 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerName="extract-utilities" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428243 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerName="extract-utilities" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428257 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428264 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428275 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428281 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428294 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428300 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" 
containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428408 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428421 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428430 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="e9e21a16-c724-46be-8e7c-c8987db90f7b" containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428438 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428447 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="780f6ddc-69b1-4e7e-ac47-c5dccdde6537" containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428456 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="8c82e1f6-e130-4361-a3a2-13613f953cbb" containerName="registry-server" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428465 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428477 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428485 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428496 5122 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428632 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428642 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428753 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.428765 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.472554 5122 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.488291 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.525581 5122 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: E0224 00:12:18.526312 5122 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.541263 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.541312 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.543378 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.543399 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.543414 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.543434 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.543449 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.543476 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.543501 5122 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.543520 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644654 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644696 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644715 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644742 5122 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644798 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644800 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644822 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644836 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644867 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644898 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644951 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.644999 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.645023 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.645068 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.645103 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.645147 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.645205 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.645279 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.645527 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc 
kubenswrapper[5122]: I0224 00:12:18.645553 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.682206 5122 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.682254 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 24 00:12:18 crc kubenswrapper[5122]: E0224 00:12:18.682753 5122 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event=< Feb 24 00:12:18 crc kubenswrapper[5122]: &Event{ObjectMeta:{kube-apiserver-crc.1897065491886f26 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:6443/readyz": dial tcp 192.168.126.11:6443: connect: connection refused Feb 24 00:12:18 crc kubenswrapper[5122]: body: Feb 24 00:12:18 crc kubenswrapper[5122]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:12:18.68223671 +0000 UTC m=+205.771691223,LastTimestamp:2026-02-24 00:12:18.68223671 +0000 UTC m=+205.771691223,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 00:12:18 crc kubenswrapper[5122]: > Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.746146 5122 generic.go:358] "Generic (PLEG): container finished" podID="f590e646-1918-49c0-a565-b475676fa33c" containerID="f2e2d914f3830ad954b2bebfd9f358264274d7bcdf050ef6169c908f056df9eb" exitCode=0 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.746242 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f590e646-1918-49c0-a565-b475676fa33c","Type":"ContainerDied","Data":"f2e2d914f3830ad954b2bebfd9f358264274d7bcdf050ef6169c908f056df9eb"} Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.748942 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.750394 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.751180 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4f05afbe3b3aa2acbf7bb698b08b183431cb39128be15abf2ee678640de1a2f9" exitCode=0 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.751198 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a68c1527a3daaf2edd8a58adc3928d53f63266e661d665d090ae7d0850e50d2e" exitCode=0 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.751205 5122 
generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cd66379a5e0fec18bb00729a9f9015cac040f0c1bc1927f73a7a5603f8d6fe10" exitCode=0 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.751210 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7e62718d5fa1a2c8d163a016ae2607ec93029e94464ebf0518d890c39534e4b0" exitCode=2 Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.751241 5122 scope.go:117] "RemoveContainer" containerID="e11c5ab9165474052e75cdbfe8a15bc344fef4b42fbdc570821cc5355d0bf98e" Feb 24 00:12:18 crc kubenswrapper[5122]: I0224 00:12:18.827376 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:19 crc kubenswrapper[5122]: I0224 00:12:19.761524 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 24 00:12:19 crc kubenswrapper[5122]: I0224 00:12:19.764510 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"0668b72723e4a9b9d496fb22b9cf7edd31b2000ebfa0e159054e19af4a1cc758"} Feb 24 00:12:19 crc kubenswrapper[5122]: I0224 00:12:19.764880 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"7c82c93583f340fd57e2ce4e01db0085a063ad4c47bda0d49a96923b66601efd"} Feb 24 00:12:19 crc kubenswrapper[5122]: I0224 00:12:19.765322 5122 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:19 crc kubenswrapper[5122]: E0224 00:12:19.765943 5122 kubelet.go:3342] 
"Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.081056 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.165846 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f590e646-1918-49c0-a565-b475676fa33c-kube-api-access\") pod \"f590e646-1918-49c0-a565-b475676fa33c\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.166134 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-kubelet-dir\") pod \"f590e646-1918-49c0-a565-b475676fa33c\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.166177 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-var-lock\") pod \"f590e646-1918-49c0-a565-b475676fa33c\" (UID: \"f590e646-1918-49c0-a565-b475676fa33c\") " Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.166652 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-var-lock" (OuterVolumeSpecName: "var-lock") pod "f590e646-1918-49c0-a565-b475676fa33c" (UID: "f590e646-1918-49c0-a565-b475676fa33c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.166631 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f590e646-1918-49c0-a565-b475676fa33c" (UID: "f590e646-1918-49c0-a565-b475676fa33c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.176629 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f590e646-1918-49c0-a565-b475676fa33c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f590e646-1918-49c0-a565-b475676fa33c" (UID: "f590e646-1918-49c0-a565-b475676fa33c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.267740 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f590e646-1918-49c0-a565-b475676fa33c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.267782 5122 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.267794 5122 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f590e646-1918-49c0-a565-b475676fa33c-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.772893 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" 
event={"ID":"f590e646-1918-49c0-a565-b475676fa33c","Type":"ContainerDied","Data":"ca2c0ea0c19863e6961cdd00aa67c7a08ad5c60b336cfffa08128ec29876e10a"} Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.773231 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2c0ea0c19863e6961cdd00aa67c7a08ad5c60b336cfffa08128ec29876e10a" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.773057 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.776749 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.777498 5122 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cfbf4f7e6544aaa90a5b7583d6b85e287ed0d459941edf55d5ac1fda8a1c905a" exitCode=0 Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.830630 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.831605 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.875585 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.875695 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.875703 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.875778 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.875835 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.875869 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.875949 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.876012 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.876388 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.876601 5122 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.876624 5122 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.876636 5122 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.876647 5122 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.878008 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:12:20 crc kubenswrapper[5122]: I0224 00:12:20.978159 5122 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.784861 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.788590 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.789738 5122 scope.go:117] "RemoveContainer" containerID="4f05afbe3b3aa2acbf7bb698b08b183431cb39128be15abf2ee678640de1a2f9" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.789768 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.813792 5122 scope.go:117] "RemoveContainer" containerID="a68c1527a3daaf2edd8a58adc3928d53f63266e661d665d090ae7d0850e50d2e" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.829403 5122 scope.go:117] "RemoveContainer" containerID="cd66379a5e0fec18bb00729a9f9015cac040f0c1bc1927f73a7a5603f8d6fe10" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.847219 5122 scope.go:117] "RemoveContainer" containerID="7e62718d5fa1a2c8d163a016ae2607ec93029e94464ebf0518d890c39534e4b0" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.862282 5122 scope.go:117] "RemoveContainer" containerID="cfbf4f7e6544aaa90a5b7583d6b85e287ed0d459941edf55d5ac1fda8a1c905a" Feb 24 00:12:21 crc kubenswrapper[5122]: I0224 00:12:21.879652 5122 scope.go:117] "RemoveContainer" containerID="4e2e508b94b0720c8553587b8cfb2f3ad7a5265f46b8e90239d02595822736e9" Feb 24 00:12:23 crc kubenswrapper[5122]: I0224 00:12:23.491163 5122 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:23 crc kubenswrapper[5122]: I0224 00:12:23.491892 5122 status_manager.go:895] "Failed to get status for pod" podUID="f590e646-1918-49c0-a565-b475676fa33c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:23 crc kubenswrapper[5122]: I0224 00:12:23.492414 5122 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:23 crc kubenswrapper[5122]: I0224 00:12:23.781578 5122 status_manager.go:895] "Failed to get status for pod" podUID="f590e646-1918-49c0-a565-b475676fa33c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:23 crc kubenswrapper[5122]: I0224 00:12:23.782634 5122 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:25 crc kubenswrapper[5122]: E0224 00:12:25.794926 5122 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" volumeName="registry-storage" Feb 24 00:12:27 crc kubenswrapper[5122]: I0224 00:12:27.116389 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:12:27 crc kubenswrapper[5122]: I0224 00:12:27.117407 5122 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:12:27 crc kubenswrapper[5122]: E0224 00:12:27.873162 5122 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event=< Feb 24 00:12:27 crc kubenswrapper[5122]: &Event{ObjectMeta:{kube-apiserver-crc.1897065491886f26 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:6443/readyz": dial tcp 192.168.126.11:6443: connect: connection refused Feb 24 00:12:27 crc kubenswrapper[5122]: body: Feb 24 00:12:27 crc kubenswrapper[5122]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:12:18.68223671 +0000 UTC m=+205.771691223,LastTimestamp:2026-02-24 00:12:18.68223671 +0000 UTC m=+205.771691223,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 00:12:27 crc kubenswrapper[5122]: > Feb 24 00:12:28 crc kubenswrapper[5122]: E0224 00:12:28.554980 5122 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:28 crc kubenswrapper[5122]: E0224 00:12:28.555978 5122 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:28 crc kubenswrapper[5122]: E0224 00:12:28.556475 5122 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:28 crc kubenswrapper[5122]: E0224 00:12:28.556996 5122 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:28 crc kubenswrapper[5122]: E0224 00:12:28.558159 5122 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:28 crc kubenswrapper[5122]: I0224 00:12:28.558382 5122 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 00:12:28 crc kubenswrapper[5122]: E0224 00:12:28.558922 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Feb 24 00:12:28 crc kubenswrapper[5122]: E0224 00:12:28.760221 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Feb 24 00:12:29 crc kubenswrapper[5122]: E0224 00:12:29.161491 5122 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Feb 24 00:12:29 crc kubenswrapper[5122]: E0224 00:12:29.962206 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Feb 24 00:12:31 crc kubenswrapper[5122]: E0224 00:12:31.563357 5122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s" Feb 24 00:12:31 crc kubenswrapper[5122]: I0224 00:12:31.857364 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:12:31 crc kubenswrapper[5122]: I0224 00:12:31.857788 5122 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="8cd870d8a5266d17b821eea88d085de06b8be9f1ffb9d281f7f78e4e68bcf7f5" exitCode=1 Feb 24 00:12:31 crc kubenswrapper[5122]: I0224 00:12:31.857932 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"8cd870d8a5266d17b821eea88d085de06b8be9f1ffb9d281f7f78e4e68bcf7f5"} Feb 24 00:12:31 crc kubenswrapper[5122]: I0224 00:12:31.858838 5122 scope.go:117] "RemoveContainer" containerID="8cd870d8a5266d17b821eea88d085de06b8be9f1ffb9d281f7f78e4e68bcf7f5" Feb 24 00:12:31 crc kubenswrapper[5122]: I0224 
00:12:31.859366 5122 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:31 crc kubenswrapper[5122]: I0224 00:12:31.860190 5122 status_manager.go:895] "Failed to get status for pod" podUID="f590e646-1918-49c0-a565-b475676fa33c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.774489 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.776276 5122 status_manager.go:895] "Failed to get status for pod" podUID="f590e646-1918-49c0-a565-b475676fa33c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.776823 5122 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.791459 5122 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e" Feb 24 00:12:32 crc 
kubenswrapper[5122]: I0224 00:12:32.791500 5122 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e" Feb 24 00:12:32 crc kubenswrapper[5122]: E0224 00:12:32.792184 5122 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.792534 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:32 crc kubenswrapper[5122]: W0224 00:12:32.819187 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-7817a50c21f6fffd0742f0de1d6e4deaf362cd0da8647b24893c0f75cc31bbe9 WatchSource:0}: Error finding container 7817a50c21f6fffd0742f0de1d6e4deaf362cd0da8647b24893c0f75cc31bbe9: Status 404 returned error can't find the container with id 7817a50c21f6fffd0742f0de1d6e4deaf362cd0da8647b24893c0f75cc31bbe9 Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.871319 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7817a50c21f6fffd0742f0de1d6e4deaf362cd0da8647b24893c0f75cc31bbe9"} Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.875938 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.876040 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6ef8c43e20b79a7dab824881c3aed17be8fee8a4204d2a7df863dbb9e3431ab6"} Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.877146 5122 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:32 crc kubenswrapper[5122]: I0224 00:12:32.877825 5122 status_manager.go:895] "Failed to get status for pod" podUID="f590e646-1918-49c0-a565-b475676fa33c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.780292 5122 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.781064 5122 status_manager.go:895] "Failed to get status for pod" podUID="f590e646-1918-49c0-a565-b475676fa33c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.781721 5122 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.885062 5122 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="0146b66826ce9efae500a4f0801d0adf067933611cde40f8c871a06452843e0a" exitCode=0 Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.885203 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"0146b66826ce9efae500a4f0801d0adf067933611cde40f8c871a06452843e0a"} Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.885407 5122 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e" Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.885423 5122 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e" Feb 24 00:12:33 crc kubenswrapper[5122]: E0224 00:12:33.885792 5122 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.885939 5122 status_manager.go:895] "Failed to get status for pod" podUID="f590e646-1918-49c0-a565-b475676fa33c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.886265 
5122 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:33 crc kubenswrapper[5122]: I0224 00:12:33.886587 5122 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Feb 24 00:12:34 crc kubenswrapper[5122]: I0224 00:12:34.351419 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:12:34 crc kubenswrapper[5122]: I0224 00:12:34.359990 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:12:34 crc kubenswrapper[5122]: I0224 00:12:34.857981 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:12:34 crc kubenswrapper[5122]: I0224 00:12:34.898206 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d8c9d5356d9c974634aa80b5a1ff7255803f8d50637c53ec1d772b4da4db1373"} Feb 24 00:12:34 crc kubenswrapper[5122]: I0224 00:12:34.898263 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"8fb4076f2373cac2894d7bd5c3f28c6f2f95aa3cbe928ecace7b4c09ef656be6"} 
Feb 24 00:12:34 crc kubenswrapper[5122]: I0224 00:12:34.898278 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"8c0cf1fe47150288d8a1793b5fcc97d17db0e57370ea40eeb7f90a9e73f36095"}
Feb 24 00:12:34 crc kubenswrapper[5122]: I0224 00:12:34.898291 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1a81af93956ab2c93ea3b615fac9dd197398e38f1bbdd5e5034ca5332906a7d3"}
Feb 24 00:12:35 crc kubenswrapper[5122]: I0224 00:12:35.906347 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"46be887172f498069eb0bb1ea4c5ae1c5fea2a9598eae145487593c4122642e6"}
Feb 24 00:12:35 crc kubenswrapper[5122]: I0224 00:12:35.906615 5122 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e"
Feb 24 00:12:35 crc kubenswrapper[5122]: I0224 00:12:35.906632 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:12:35 crc kubenswrapper[5122]: I0224 00:12:35.906641 5122 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e"
Feb 24 00:12:37 crc kubenswrapper[5122]: I0224 00:12:37.793458 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:12:37 crc kubenswrapper[5122]: I0224 00:12:37.793511 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:12:37 crc kubenswrapper[5122]: I0224 00:12:37.798269 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:12:40 crc kubenswrapper[5122]: I0224 00:12:40.915651 5122 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:12:40 crc kubenswrapper[5122]: I0224 00:12:40.915945 5122 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:12:40 crc kubenswrapper[5122]: I0224 00:12:40.937161 5122 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e"
Feb 24 00:12:40 crc kubenswrapper[5122]: I0224 00:12:40.937201 5122 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e"
Feb 24 00:12:40 crc kubenswrapper[5122]: I0224 00:12:40.942525 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:12:40 crc kubenswrapper[5122]: I0224 00:12:40.945504 5122 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="d338bd6b-6863-44b7-8fe2-958fef094f83"
Feb 24 00:12:41 crc kubenswrapper[5122]: I0224 00:12:41.943749 5122 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e"
Feb 24 00:12:41 crc kubenswrapper[5122]: I0224 00:12:41.944090 5122 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b0a35ad-c3da-4754-8842-c052ad912e2e"
Feb 24 00:12:43 crc kubenswrapper[5122]: I0224 00:12:43.786658 5122 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="d338bd6b-6863-44b7-8fe2-958fef094f83"
Feb 24 00:12:45 crc kubenswrapper[5122]: I0224 00:12:45.916459 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:12:49 crc kubenswrapper[5122]: I0224 00:12:49.912303 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Feb 24 00:12:51 crc kubenswrapper[5122]: I0224 00:12:51.540885 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Feb 24 00:12:51 crc kubenswrapper[5122]: I0224 00:12:51.703289 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 24 00:12:51 crc kubenswrapper[5122]: I0224 00:12:51.927188 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.026571 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.040952 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.141641 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.165526 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.462127 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.478639 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.538117 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.632534 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.755463 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.785174 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.824510 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 24 00:12:52 crc kubenswrapper[5122]: I0224 00:12:52.865596 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.033283 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.053623 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.121775 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.131558 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.176139 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.193990 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.285000 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.292626 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.410475 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.431974 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.507146 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.533525 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.568375 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.621186 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.750523 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.756675 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.774452 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.800986 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.878590 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 24 00:12:53 crc kubenswrapper[5122]: I0224 00:12:53.921308 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.031177 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.049589 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.091604 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.147178 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.175723 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.241391 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.284259 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.316647 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.323112 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.344839 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.429910 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.519899 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.531512 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.599921 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.650702 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.677857 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.699961 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.783062 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.804556 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.825953 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.833442 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.852484 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.887784 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.892552 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.904319 5122 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 24 00:12:54 crc kubenswrapper[5122]: I0224 00:12:54.998914 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.052621 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.095770 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.125291 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.139338 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.193183 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.214278 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.246739 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.364816 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.402824 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.405624 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.451368 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.570153 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.613807 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.689600 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.699836 5122 ???:1] "http: TLS handshake error from 192.168.126.11:37442: no serving certificate available for the kubelet"
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.725357 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.729664 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.816680 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.974726 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.991257 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Feb 24 00:12:55 crc kubenswrapper[5122]: I0224 00:12:55.996164 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.028596 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.141149 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.182719 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.334032 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.433779 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.494326 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.521802 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.560131 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.611189 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.621217 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.625674 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.629030 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.637571 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.639121 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.747148 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.781056 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.796196 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Feb 24 00:12:56 crc kubenswrapper[5122]: I0224 00:12:56.928595 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.029161 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.063403 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.068110 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.073357 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.115215 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.115281 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.288235 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.299484 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.474583 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.483144 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.489912 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.495408 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.825949 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.885562 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.889232 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Feb 24 00:12:57 crc kubenswrapper[5122]: I0224 00:12:57.955280 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.014149 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.133739 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.163683 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.282967 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.289044 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.354753 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.373760 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.398872 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.401974 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.437371 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.560212 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.591252 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.694307 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.828230 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.828653 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.870272 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.916622 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.918495 5122 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.931293 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Feb 24 00:12:58 crc kubenswrapper[5122]: I0224 00:12:58.932728 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.043439 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.090479 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.194811 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.208714 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.296626 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.314398 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.336357 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.477939 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.494466 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.513415 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.526068 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.605031 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.777747 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.785121 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.788549 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.857672 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.870760 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Feb 24 00:12:59 crc kubenswrapper[5122]: I0224 00:12:59.994142 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.045738 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.106171 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.132705 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.167050 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.235436 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.244785 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.253598 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.292922 5122 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.304161
5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.314162 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.450130 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.463601 5122 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.475490 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.674331 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.754876 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.798566 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.846003 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.877427 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 24 
00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.939620 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.940274 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 24 00:13:00 crc kubenswrapper[5122]: I0224 00:13:00.984438 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.017144 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.048819 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.068529 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.118572 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.142606 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.143858 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.313923 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:13:01 crc 
kubenswrapper[5122]: I0224 00:13:01.336512 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.399521 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.411844 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.429043 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.479364 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.646237 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.695364 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.729220 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.779959 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.792930 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.812469 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.826017 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.840495 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.910412 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 24 00:13:01 crc kubenswrapper[5122]: I0224 00:13:01.946546 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.230121 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.311939 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.462141 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.583564 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.780986 5122 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.816198 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.825289 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.847999 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.859390 5122 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.875971 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.878578 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:13:02 crc kubenswrapper[5122]: I0224 00:13:02.912406 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.084909 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.122836 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.192811 5122 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.230945 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.298622 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.333600 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.403893 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.419817 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.444871 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.559837 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.662791 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.739961 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.813711 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.818513 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.926953 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.943712 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.979684 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 24 00:13:03 crc kubenswrapper[5122]: I0224 00:13:03.980525 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.035699 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.082183 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.135701 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.222858 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.239181 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.251802 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.264994 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.462033 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.492912 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.653566 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.785563 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.822756 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 24 00:13:04 crc kubenswrapper[5122]: I0224 00:13:04.950502 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 24 00:13:05 crc kubenswrapper[5122]: I0224 00:13:05.184895 5122 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 24 00:13:05 crc kubenswrapper[5122]: I0224 00:13:05.370688 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 24 00:13:05 crc kubenswrapper[5122]: I0224 00:13:05.407989 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 24 00:13:05 crc kubenswrapper[5122]: I0224 00:13:05.529840 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 24 00:13:05 crc kubenswrapper[5122]: I0224 00:13:05.573122 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 24 00:13:05 crc kubenswrapper[5122]: I0224 00:13:05.682370 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 24 00:13:05 crc kubenswrapper[5122]: I0224 00:13:05.721397 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 24 00:13:05 crc kubenswrapper[5122]: I0224 00:13:05.988634 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 24 00:13:06 crc kubenswrapper[5122]: I0224 00:13:06.050665 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 24 00:13:06 crc kubenswrapper[5122]: I0224 00:13:06.152970 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 24 00:13:06 crc kubenswrapper[5122]: I0224 00:13:06.223470 5122 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 24 00:13:06 crc kubenswrapper[5122]: I0224 00:13:06.273333 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 24 00:13:06 crc kubenswrapper[5122]: I0224 00:13:06.578613 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 24 00:13:07 crc kubenswrapper[5122]: I0224 00:13:07.200417 5122 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 24 00:13:07 crc kubenswrapper[5122]: I0224 00:13:07.208090 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:13:07 crc kubenswrapper[5122]: I0224 00:13:07.208145 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:13:07 crc kubenswrapper[5122]: I0224 00:13:07.215510 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:13:07 crc kubenswrapper[5122]: I0224 00:13:07.230817 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.230791127 podStartE2EDuration="27.230791127s" podCreationTimestamp="2026-02-24 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:13:07.228090815 +0000 UTC m=+254.317545338" watchObservedRunningTime="2026-02-24 00:13:07.230791127 +0000 UTC m=+254.320245670" Feb 24 00:13:07 crc kubenswrapper[5122]: I0224 00:13:07.437777 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 24 00:13:14 crc kubenswrapper[5122]: I0224 00:13:14.681708 5122 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 00:13:14 crc kubenswrapper[5122]: I0224 00:13:14.682167 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://0668b72723e4a9b9d496fb22b9cf7edd31b2000ebfa0e159054e19af4a1cc758" gracePeriod=5 Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.179645 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.180248 5122 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="0668b72723e4a9b9d496fb22b9cf7edd31b2000ebfa0e159054e19af4a1cc758" exitCode=137 Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.268200 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.268566 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.270411 5122 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.293005 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.293301 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.293500 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.293145 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.293372 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.294406 5122 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.294568 5122 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.300760 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.396195 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.396695 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.396348 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.396745 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.397223 5122 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.397326 5122 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:20 crc kubenswrapper[5122]: I0224 00:13:20.397403 5122 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:21 crc kubenswrapper[5122]: I0224 00:13:21.189187 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 24 00:13:21 crc kubenswrapper[5122]: I0224 00:13:21.189390 5122 scope.go:117] "RemoveContainer" containerID="0668b72723e4a9b9d496fb22b9cf7edd31b2000ebfa0e159054e19af4a1cc758" Feb 24 00:13:21 crc kubenswrapper[5122]: I0224 00:13:21.189508 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:13:21 crc kubenswrapper[5122]: I0224 00:13:21.211275 5122 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 24 00:13:21 crc kubenswrapper[5122]: I0224 00:13:21.782566 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 24 00:13:23 crc kubenswrapper[5122]: I0224 00:13:23.208113 5122 generic.go:358] "Generic (PLEG): container finished" podID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerID="a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e" exitCode=0 Feb 24 00:13:23 crc kubenswrapper[5122]: I0224 00:13:23.208199 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" event={"ID":"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1","Type":"ContainerDied","Data":"a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e"} Feb 24 00:13:23 crc kubenswrapper[5122]: I0224 00:13:23.208894 5122 scope.go:117] "RemoveContainer" containerID="a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e" Feb 24 00:13:24 crc kubenswrapper[5122]: I0224 00:13:24.222540 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" event={"ID":"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1","Type":"ContainerStarted","Data":"6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19"} Feb 24 00:13:24 crc kubenswrapper[5122]: I0224 00:13:24.223012 5122 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:13:24 crc kubenswrapper[5122]: I0224 00:13:24.225136 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.080646 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"] Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.082271 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" podUID="36b3c56e-ec77-4507-a2c4-8556b0239225" containerName="controller-manager" containerID="cri-o://71707b15587d62be35b26914938324551ba0b49f5fc6fa3d78a7b035ad1b168f" gracePeriod=30 Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.130898 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"] Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.132847 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" podUID="e8179910-a8d8-4190-89c7-fe04a9f19e86" containerName="route-controller-manager" containerID="cri-o://4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4" gracePeriod=30 Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.243848 5122 generic.go:358] "Generic (PLEG): container finished" podID="36b3c56e-ec77-4507-a2c4-8556b0239225" containerID="71707b15587d62be35b26914938324551ba0b49f5fc6fa3d78a7b035ad1b168f" exitCode=0 Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.243945 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" 
event={"ID":"36b3c56e-ec77-4507-a2c4-8556b0239225","Type":"ContainerDied","Data":"71707b15587d62be35b26914938324551ba0b49f5fc6fa3d78a7b035ad1b168f"} Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.442918 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.448374 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479293 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qphr7\" (UniqueName: \"kubernetes.io/projected/36b3c56e-ec77-4507-a2c4-8556b0239225-kube-api-access-qphr7\") pod \"36b3c56e-ec77-4507-a2c4-8556b0239225\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479340 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-proxy-ca-bundles\") pod \"36b3c56e-ec77-4507-a2c4-8556b0239225\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479408 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-client-ca\") pod \"e8179910-a8d8-4190-89c7-fe04a9f19e86\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479439 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8179910-a8d8-4190-89c7-fe04a9f19e86-tmp\") pod \"e8179910-a8d8-4190-89c7-fe04a9f19e86\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " Feb 
24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479462 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-config\") pod \"36b3c56e-ec77-4507-a2c4-8556b0239225\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479492 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2ks4\" (UniqueName: \"kubernetes.io/projected/e8179910-a8d8-4190-89c7-fe04a9f19e86-kube-api-access-h2ks4\") pod \"e8179910-a8d8-4190-89c7-fe04a9f19e86\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479533 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b3c56e-ec77-4507-a2c4-8556b0239225-tmp\") pod \"36b3c56e-ec77-4507-a2c4-8556b0239225\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479626 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b3c56e-ec77-4507-a2c4-8556b0239225-serving-cert\") pod \"36b3c56e-ec77-4507-a2c4-8556b0239225\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479654 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8179910-a8d8-4190-89c7-fe04a9f19e86-serving-cert\") pod \"e8179910-a8d8-4190-89c7-fe04a9f19e86\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479679 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-client-ca\") pod 
\"36b3c56e-ec77-4507-a2c4-8556b0239225\" (UID: \"36b3c56e-ec77-4507-a2c4-8556b0239225\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.479727 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-config\") pod \"e8179910-a8d8-4190-89c7-fe04a9f19e86\" (UID: \"e8179910-a8d8-4190-89c7-fe04a9f19e86\") " Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.480823 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-config" (OuterVolumeSpecName: "config") pod "e8179910-a8d8-4190-89c7-fe04a9f19e86" (UID: "e8179910-a8d8-4190-89c7-fe04a9f19e86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.483312 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36b3c56e-ec77-4507-a2c4-8556b0239225-tmp" (OuterVolumeSpecName: "tmp") pod "36b3c56e-ec77-4507-a2c4-8556b0239225" (UID: "36b3c56e-ec77-4507-a2c4-8556b0239225"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.484240 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-client-ca" (OuterVolumeSpecName: "client-ca") pod "e8179910-a8d8-4190-89c7-fe04a9f19e86" (UID: "e8179910-a8d8-4190-89c7-fe04a9f19e86"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.484779 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-dljph"] Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.485928 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8179910-a8d8-4190-89c7-fe04a9f19e86-tmp" (OuterVolumeSpecName: "tmp") pod "e8179910-a8d8-4190-89c7-fe04a9f19e86" (UID: "e8179910-a8d8-4190-89c7-fe04a9f19e86"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.486525 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-client-ca" (OuterVolumeSpecName: "client-ca") pod "36b3c56e-ec77-4507-a2c4-8556b0239225" (UID: "36b3c56e-ec77-4507-a2c4-8556b0239225"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.486919 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-config" (OuterVolumeSpecName: "config") pod "36b3c56e-ec77-4507-a2c4-8556b0239225" (UID: "36b3c56e-ec77-4507-a2c4-8556b0239225"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.487197 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8179910-a8d8-4190-89c7-fe04a9f19e86-kube-api-access-h2ks4" (OuterVolumeSpecName: "kube-api-access-h2ks4") pod "e8179910-a8d8-4190-89c7-fe04a9f19e86" (UID: "e8179910-a8d8-4190-89c7-fe04a9f19e86"). InnerVolumeSpecName "kube-api-access-h2ks4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.487233 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b3c56e-ec77-4507-a2c4-8556b0239225-kube-api-access-qphr7" (OuterVolumeSpecName: "kube-api-access-qphr7") pod "36b3c56e-ec77-4507-a2c4-8556b0239225" (UID: "36b3c56e-ec77-4507-a2c4-8556b0239225"). InnerVolumeSpecName "kube-api-access-qphr7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488108 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b3c56e-ec77-4507-a2c4-8556b0239225-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "36b3c56e-ec77-4507-a2c4-8556b0239225" (UID: "36b3c56e-ec77-4507-a2c4-8556b0239225"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488416 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488451 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488479 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8179910-a8d8-4190-89c7-fe04a9f19e86" containerName="route-controller-manager" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488491 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8179910-a8d8-4190-89c7-fe04a9f19e86" containerName="route-controller-manager" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488518 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36b3c56e-ec77-4507-a2c4-8556b0239225" 
containerName="controller-manager" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488528 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b3c56e-ec77-4507-a2c4-8556b0239225" containerName="controller-manager" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488542 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f590e646-1918-49c0-a565-b475676fa33c" containerName="installer" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488552 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="f590e646-1918-49c0-a565-b475676fa33c" containerName="installer" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488714 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="e8179910-a8d8-4190-89c7-fe04a9f19e86" containerName="route-controller-manager" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488736 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="36b3c56e-ec77-4507-a2c4-8556b0239225" containerName="controller-manager" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488749 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="f590e646-1918-49c0-a565-b475676fa33c" containerName="installer" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.488758 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.490646 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "36b3c56e-ec77-4507-a2c4-8556b0239225" (UID: "36b3c56e-ec77-4507-a2c4-8556b0239225"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.490762 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8179910-a8d8-4190-89c7-fe04a9f19e86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e8179910-a8d8-4190-89c7-fe04a9f19e86" (UID: "e8179910-a8d8-4190-89c7-fe04a9f19e86"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.496161 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.497783 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-dljph"] Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.514403 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"] Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.520131 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"] Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.520250 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.580856 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-config\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.580912 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-client-ca\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.580952 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvmcl\" (UniqueName: \"kubernetes.io/projected/3e099e78-6972-4572-8c55-fc2797b0809d-kube-api-access-gvmcl\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.580981 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-proxy-ca-bundles\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581008 5122 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-client-ca\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581033 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xv8\" (UniqueName: \"kubernetes.io/projected/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-kube-api-access-f7xv8\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581057 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e099e78-6972-4572-8c55-fc2797b0809d-tmp\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581131 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-config\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581167 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e099e78-6972-4572-8c55-fc2797b0809d-serving-cert\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: 
\"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581205 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-serving-cert\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581228 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-tmp\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581268 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b3c56e-ec77-4507-a2c4-8556b0239225-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581283 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b3c56e-ec77-4507-a2c4-8556b0239225-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581294 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8179910-a8d8-4190-89c7-fe04a9f19e86-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581303 5122 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-client-ca\") on node \"crc\" DevicePath 
\"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581314 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581324 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qphr7\" (UniqueName: \"kubernetes.io/projected/36b3c56e-ec77-4507-a2c4-8556b0239225-kube-api-access-qphr7\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581335 5122 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581345 5122 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8179910-a8d8-4190-89c7-fe04a9f19e86-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581355 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8179910-a8d8-4190-89c7-fe04a9f19e86-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581366 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b3c56e-ec77-4507-a2c4-8556b0239225-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.581375 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2ks4\" (UniqueName: \"kubernetes.io/projected/e8179910-a8d8-4190-89c7-fe04a9f19e86-kube-api-access-h2ks4\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.682991 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-client-ca\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683058 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gvmcl\" (UniqueName: \"kubernetes.io/projected/3e099e78-6972-4572-8c55-fc2797b0809d-kube-api-access-gvmcl\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683116 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-proxy-ca-bundles\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683147 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-client-ca\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683172 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f7xv8\" (UniqueName: \"kubernetes.io/projected/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-kube-api-access-f7xv8\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " 
pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683208 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e099e78-6972-4572-8c55-fc2797b0809d-tmp\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683232 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-config\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683264 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e099e78-6972-4572-8c55-fc2797b0809d-serving-cert\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683296 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-serving-cert\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683321 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-tmp\") pod 
\"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.683347 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-config\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.684001 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e099e78-6972-4572-8c55-fc2797b0809d-tmp\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.684666 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-client-ca\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.684667 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-tmp\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.684666 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-client-ca\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.685284 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-config\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.685578 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-config\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.686506 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-proxy-ca-bundles\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.688570 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e099e78-6972-4572-8c55-fc2797b0809d-serving-cert\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.688570 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-serving-cert\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.704730 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvmcl\" (UniqueName: \"kubernetes.io/projected/3e099e78-6972-4572-8c55-fc2797b0809d-kube-api-access-gvmcl\") pod \"route-controller-manager-8569f5b7b7-fgmwh\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.714152 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7xv8\" (UniqueName: \"kubernetes.io/projected/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-kube-api-access-f7xv8\") pod \"controller-manager-8b6998464-dljph\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") " pod="openshift-controller-manager/controller-manager-8b6998464-dljph"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.757624 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.820608 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b6998464-dljph"
Feb 24 00:13:26 crc kubenswrapper[5122]: I0224 00:13:26.836601 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.098542 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-dljph"]
Feb 24 00:13:27 crc kubenswrapper[5122]: W0224 00:13:27.102968 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab76d06a_8a15_4887_b5f0_d56a5fd0b684.slice/crio-00d6dd9958abf71d5fca5a8de722cfc9cda52687f30d746a0ab65b88b29460ca WatchSource:0}: Error finding container 00d6dd9958abf71d5fca5a8de722cfc9cda52687f30d746a0ab65b88b29460ca: Status 404 returned error can't find the container with id 00d6dd9958abf71d5fca5a8de722cfc9cda52687f30d746a0ab65b88b29460ca
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.115283 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.115349 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.115399 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.119519 5122 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73cf22631bce10f6195cc5bf18e0532829e23827e5caef8d4c7a64bb33e6728b"} pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.119643 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" containerID="cri-o://73cf22631bce10f6195cc5bf18e0532829e23827e5caef8d4c7a64bb33e6728b" gracePeriod=600
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.139105 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"]
Feb 24 00:13:27 crc kubenswrapper[5122]: W0224 00:13:27.141436 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e099e78_6972_4572_8c55_fc2797b0809d.slice/crio-6877478bf12d9f65a8df7530842679f4da61deca74db9c900090c0276551df6b WatchSource:0}: Error finding container 6877478bf12d9f65a8df7530842679f4da61deca74db9c900090c0276551df6b: Status 404 returned error can't find the container with id 6877478bf12d9f65a8df7530842679f4da61deca74db9c900090c0276551df6b
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.261479 5122 generic.go:358] "Generic (PLEG): container finished" podID="e8179910-a8d8-4190-89c7-fe04a9f19e86" containerID="4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4" exitCode=0
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.261904 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" event={"ID":"e8179910-a8d8-4190-89c7-fe04a9f19e86","Type":"ContainerDied","Data":"4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4"}
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.261933 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst" event={"ID":"e8179910-a8d8-4190-89c7-fe04a9f19e86","Type":"ContainerDied","Data":"c8cf085ac037c3d973c613fd826b92dc1aafcf166a1471346f1ade9750f234b3"}
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.261949 5122 scope.go:117] "RemoveContainer" containerID="4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.262059 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.282135 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.282148 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lxjqf" event={"ID":"36b3c56e-ec77-4507-a2c4-8556b0239225","Type":"ContainerDied","Data":"a91e3a982914624b69218491025c2e3940e1f2815b54d5ad50918456bc68a7d2"}
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.292462 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" event={"ID":"3e099e78-6972-4572-8c55-fc2797b0809d","Type":"ContainerStarted","Data":"6877478bf12d9f65a8df7530842679f4da61deca74db9c900090c0276551df6b"}
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.292707 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.294289 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" event={"ID":"ab76d06a-8a15-4887-b5f0-d56a5fd0b684","Type":"ContainerStarted","Data":"ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964"}
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.294329 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" event={"ID":"ab76d06a-8a15-4887-b5f0-d56a5fd0b684","Type":"ContainerStarted","Data":"00d6dd9958abf71d5fca5a8de722cfc9cda52687f30d746a0ab65b88b29460ca"}
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.294427 5122 patch_prober.go:28] interesting pod/route-controller-manager-8569f5b7b7-fgmwh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.294469 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" podUID="3e099e78-6972-4572-8c55-fc2797b0809d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.294778 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8b6998464-dljph"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.295406 5122 patch_prober.go:28] interesting pod/controller-manager-8b6998464-dljph container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.295445 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" podUID="ab76d06a-8a15-4887-b5f0-d56a5fd0b684" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.312584 5122 scope.go:117] "RemoveContainer" containerID="4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.316860 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"]
Feb 24 00:13:27 crc kubenswrapper[5122]: E0224 00:13:27.323465 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4\": container with ID starting with 4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4 not found: ID does not exist" containerID="4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.323501 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4"} err="failed to get container status \"4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4\": rpc error: code = NotFound desc = could not find container \"4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4\": container with ID starting with 4da46be65b4285e02ea44f10f7ed3cfeb95e546a1af6ee207af0738dd961afd4 not found: ID does not exist"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.323524 5122 scope.go:117] "RemoveContainer" containerID="71707b15587d62be35b26914938324551ba0b49f5fc6fa3d78a7b035ad1b168f"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.323683 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-dljph"]
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.325011 5122 generic.go:358] "Generic (PLEG): container finished" podID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerID="73cf22631bce10f6195cc5bf18e0532829e23827e5caef8d4c7a64bb33e6728b" exitCode=0
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.325116 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerDied","Data":"73cf22631bce10f6195cc5bf18e0532829e23827e5caef8d4c7a64bb33e6728b"}
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.330586 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-b5hst"]
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.349665 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"]
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.353892 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" podStartSLOduration=1.3538769849999999 podStartE2EDuration="1.353876985s" podCreationTimestamp="2026-02-24 00:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:13:27.346750295 +0000 UTC m=+274.436204818" watchObservedRunningTime="2026-02-24 00:13:27.353876985 +0000 UTC m=+274.443331498"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.374016 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" podStartSLOduration=1.373991483 podStartE2EDuration="1.373991483s" podCreationTimestamp="2026-02-24 00:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:13:27.365099166 +0000 UTC m=+274.454553699" watchObservedRunningTime="2026-02-24 00:13:27.373991483 +0000 UTC m=+274.463445996"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.383438 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"]
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.386880 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lxjqf"]
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.793781 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b3c56e-ec77-4507-a2c4-8556b0239225" path="/var/lib/kubelet/pods/36b3c56e-ec77-4507-a2c4-8556b0239225/volumes"
Feb 24 00:13:27 crc kubenswrapper[5122]: I0224 00:13:27.794783 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8179910-a8d8-4190-89c7-fe04a9f19e86" path="/var/lib/kubelet/pods/e8179910-a8d8-4190-89c7-fe04a9f19e86/volumes"
Feb 24 00:13:28 crc kubenswrapper[5122]: I0224 00:13:28.338037 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"50ee2266507123df66125337ecf3ff8ca0f7771d42782902e0efdef0eafd857f"}
Feb 24 00:13:28 crc kubenswrapper[5122]: I0224 00:13:28.346291 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" event={"ID":"3e099e78-6972-4572-8c55-fc2797b0809d","Type":"ContainerStarted","Data":"225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69"}
Feb 24 00:13:28 crc kubenswrapper[5122]: I0224 00:13:28.358511 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"
Feb 24 00:13:28 crc kubenswrapper[5122]: I0224 00:13:28.360284 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b6998464-dljph"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.166022 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-jnnfl"]
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.351630 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" podUID="3e099e78-6972-4572-8c55-fc2797b0809d" containerName="route-controller-manager" containerID="cri-o://225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69" gracePeriod=30
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.353410 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" podUID="ab76d06a-8a15-4887-b5f0-d56a5fd0b684" containerName="controller-manager" containerID="cri-o://ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964" gracePeriod=30
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.714482 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b6998464-dljph"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.718580 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.741064 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-768cb5858b-4kns6"]
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.741625 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab76d06a-8a15-4887-b5f0-d56a5fd0b684" containerName="controller-manager"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.741643 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab76d06a-8a15-4887-b5f0-d56a5fd0b684" containerName="controller-manager"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.741652 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e099e78-6972-4572-8c55-fc2797b0809d" containerName="route-controller-manager"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.741657 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e099e78-6972-4572-8c55-fc2797b0809d" containerName="route-controller-manager"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.741752 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab76d06a-8a15-4887-b5f0-d56a5fd0b684" containerName="controller-manager"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.741762 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3e099e78-6972-4572-8c55-fc2797b0809d" containerName="route-controller-manager"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.748573 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.751114 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-768cb5858b-4kns6"]
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.758389 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"]
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.762550 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.785908 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"]
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.846450 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-client-ca\") pod \"3e099e78-6972-4572-8c55-fc2797b0809d\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.846519 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-config\") pod \"3e099e78-6972-4572-8c55-fc2797b0809d\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.846561 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-client-ca\") pod \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.846649 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvmcl\" (UniqueName: \"kubernetes.io/projected/3e099e78-6972-4572-8c55-fc2797b0809d-kube-api-access-gvmcl\") pod \"3e099e78-6972-4572-8c55-fc2797b0809d\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.846725 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7xv8\" (UniqueName: \"kubernetes.io/projected/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-kube-api-access-f7xv8\") pod \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.846774 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-serving-cert\") pod \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847020 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e099e78-6972-4572-8c55-fc2797b0809d-tmp\") pod \"3e099e78-6972-4572-8c55-fc2797b0809d\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847158 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-config\") pod \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847197 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e099e78-6972-4572-8c55-fc2797b0809d" (UID: "3e099e78-6972-4572-8c55-fc2797b0809d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847233 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e099e78-6972-4572-8c55-fc2797b0809d-serving-cert\") pod \"3e099e78-6972-4572-8c55-fc2797b0809d\" (UID: \"3e099e78-6972-4572-8c55-fc2797b0809d\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847300 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e099e78-6972-4572-8c55-fc2797b0809d-tmp" (OuterVolumeSpecName: "tmp") pod "3e099e78-6972-4572-8c55-fc2797b0809d" (UID: "3e099e78-6972-4572-8c55-fc2797b0809d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847303 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-proxy-ca-bundles\") pod \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847353 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-tmp\") pod \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\" (UID: \"ab76d06a-8a15-4887-b5f0-d56a5fd0b684\") "
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847492 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-config\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847513 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x72bg\" (UniqueName: \"kubernetes.io/projected/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-kube-api-access-x72bg\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847530 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-client-ca\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847587 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8b06d-e3ea-4363-9d24-77f35473b5a7-serving-cert\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847606 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39f8b06d-e3ea-4363-9d24-77f35473b5a7-tmp\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847628 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-serving-cert\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847642 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-tmp\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847681 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-config\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847716 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-client-ca\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847745 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xgwj\" (UniqueName: \"kubernetes.io/projected/39f8b06d-e3ea-4363-9d24-77f35473b5a7-kube-api-access-5xgwj\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847743 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-config" (OuterVolumeSpecName: "config") pod "ab76d06a-8a15-4887-b5f0-d56a5fd0b684" (UID: "ab76d06a-8a15-4887-b5f0-d56a5fd0b684"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847768 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-proxy-ca-bundles\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847810 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e099e78-6972-4572-8c55-fc2797b0809d-tmp\") on node \"crc\" DevicePath \"\""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847820 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847816 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ab76d06a-8a15-4887-b5f0-d56a5fd0b684" (UID: "ab76d06a-8a15-4887-b5f0-d56a5fd0b684"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.847828 5122 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.848137 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-client-ca" (OuterVolumeSpecName: "client-ca") pod "ab76d06a-8a15-4887-b5f0-d56a5fd0b684" (UID: "ab76d06a-8a15-4887-b5f0-d56a5fd0b684"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.848197 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-tmp" (OuterVolumeSpecName: "tmp") pod "ab76d06a-8a15-4887-b5f0-d56a5fd0b684" (UID: "ab76d06a-8a15-4887-b5f0-d56a5fd0b684"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.848788 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-config" (OuterVolumeSpecName: "config") pod "3e099e78-6972-4572-8c55-fc2797b0809d" (UID: "3e099e78-6972-4572-8c55-fc2797b0809d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.852691 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e099e78-6972-4572-8c55-fc2797b0809d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e099e78-6972-4572-8c55-fc2797b0809d" (UID: "3e099e78-6972-4572-8c55-fc2797b0809d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.853201 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-kube-api-access-f7xv8" (OuterVolumeSpecName: "kube-api-access-f7xv8") pod "ab76d06a-8a15-4887-b5f0-d56a5fd0b684" (UID: "ab76d06a-8a15-4887-b5f0-d56a5fd0b684"). InnerVolumeSpecName "kube-api-access-f7xv8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.853875 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e099e78-6972-4572-8c55-fc2797b0809d-kube-api-access-gvmcl" (OuterVolumeSpecName: "kube-api-access-gvmcl") pod "3e099e78-6972-4572-8c55-fc2797b0809d" (UID: "3e099e78-6972-4572-8c55-fc2797b0809d"). InnerVolumeSpecName "kube-api-access-gvmcl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.854323 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ab76d06a-8a15-4887-b5f0-d56a5fd0b684" (UID: "ab76d06a-8a15-4887-b5f0-d56a5fd0b684"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.948733 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8b06d-e3ea-4363-9d24-77f35473b5a7-serving-cert\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.948781 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39f8b06d-e3ea-4363-9d24-77f35473b5a7-tmp\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.948804 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-serving-cert\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.948821 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-tmp\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.948857 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-config\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.948882 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-client-ca\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949032 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xgwj\" (UniqueName: \"kubernetes.io/projected/39f8b06d-e3ea-4363-9d24-77f35473b5a7-kube-api-access-5xgwj\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949122 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-proxy-ca-bundles\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949203 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-config\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"
Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949228 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for
volume \"kube-api-access-x72bg\" (UniqueName: \"kubernetes.io/projected/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-kube-api-access-x72bg\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949251 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-client-ca\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949319 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e099e78-6972-4572-8c55-fc2797b0809d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949344 5122 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949362 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949378 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e099e78-6972-4572-8c55-fc2797b0809d-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949394 5122 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949412 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gvmcl\" (UniqueName: \"kubernetes.io/projected/3e099e78-6972-4572-8c55-fc2797b0809d-kube-api-access-gvmcl\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949429 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f7xv8\" (UniqueName: \"kubernetes.io/projected/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-kube-api-access-f7xv8\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.949446 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab76d06a-8a15-4887-b5f0-d56a5fd0b684-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.950124 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39f8b06d-e3ea-4363-9d24-77f35473b5a7-tmp\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.950565 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-tmp\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.951013 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-client-ca\") pod 
\"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.951422 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-proxy-ca-bundles\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.951456 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-client-ca\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.951982 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-config\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.952500 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-config\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.956036 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/39f8b06d-e3ea-4363-9d24-77f35473b5a7-serving-cert\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.957473 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-serving-cert\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.975037 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xgwj\" (UniqueName: \"kubernetes.io/projected/39f8b06d-e3ea-4363-9d24-77f35473b5a7-kube-api-access-5xgwj\") pod \"controller-manager-768cb5858b-4kns6\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:29 crc kubenswrapper[5122]: I0224 00:13:29.980116 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x72bg\" (UniqueName: \"kubernetes.io/projected/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-kube-api-access-x72bg\") pod \"route-controller-manager-5cc5b9dd6d-t8fs9\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.064642 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.082132 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.278585 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-768cb5858b-4kns6"] Feb 24 00:13:30 crc kubenswrapper[5122]: W0224 00:13:30.285205 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39f8b06d_e3ea_4363_9d24_77f35473b5a7.slice/crio-cca4a2647f1aeb8a451305d4a1b08c0f99a04889716e713491dd411e9be3dfca WatchSource:0}: Error finding container cca4a2647f1aeb8a451305d4a1b08c0f99a04889716e713491dd411e9be3dfca: Status 404 returned error can't find the container with id cca4a2647f1aeb8a451305d4a1b08c0f99a04889716e713491dd411e9be3dfca Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.316145 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"] Feb 24 00:13:30 crc kubenswrapper[5122]: W0224 00:13:30.322350 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadd6a021_d7aa_4c6c_9a98_b5a5b4ea800d.slice/crio-588dd76163ad6cca2cd759ba578802c912a6c8edda2f5890727f8e16ac552ad3 WatchSource:0}: Error finding container 588dd76163ad6cca2cd759ba578802c912a6c8edda2f5890727f8e16ac552ad3: Status 404 returned error can't find the container with id 588dd76163ad6cca2cd759ba578802c912a6c8edda2f5890727f8e16ac552ad3 Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.361602 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" event={"ID":"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d","Type":"ContainerStarted","Data":"588dd76163ad6cca2cd759ba578802c912a6c8edda2f5890727f8e16ac552ad3"} Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.362948 5122 
generic.go:358] "Generic (PLEG): container finished" podID="3e099e78-6972-4572-8c55-fc2797b0809d" containerID="225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69" exitCode=0 Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.363014 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" event={"ID":"3e099e78-6972-4572-8c55-fc2797b0809d","Type":"ContainerDied","Data":"225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69"} Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.363040 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" event={"ID":"3e099e78-6972-4572-8c55-fc2797b0809d","Type":"ContainerDied","Data":"6877478bf12d9f65a8df7530842679f4da61deca74db9c900090c0276551df6b"} Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.363056 5122 scope.go:117] "RemoveContainer" containerID="225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.363202 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.365597 5122 generic.go:358] "Generic (PLEG): container finished" podID="ab76d06a-8a15-4887-b5f0-d56a5fd0b684" containerID="ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964" exitCode=0 Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.365689 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.367253 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" event={"ID":"ab76d06a-8a15-4887-b5f0-d56a5fd0b684","Type":"ContainerDied","Data":"ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964"} Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.367290 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b6998464-dljph" event={"ID":"ab76d06a-8a15-4887-b5f0-d56a5fd0b684","Type":"ContainerDied","Data":"00d6dd9958abf71d5fca5a8de722cfc9cda52687f30d746a0ab65b88b29460ca"} Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.372589 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" event={"ID":"39f8b06d-e3ea-4363-9d24-77f35473b5a7","Type":"ContainerStarted","Data":"cca4a2647f1aeb8a451305d4a1b08c0f99a04889716e713491dd411e9be3dfca"} Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.393315 5122 scope.go:117] "RemoveContainer" containerID="225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69" Feb 24 00:13:30 crc kubenswrapper[5122]: E0224 00:13:30.394516 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69\": container with ID starting with 225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69 not found: ID does not exist" containerID="225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.394561 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69"} err="failed to 
get container status \"225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69\": rpc error: code = NotFound desc = could not find container \"225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69\": container with ID starting with 225a07c9eb0042d5f00666aaa18e97dbfc33713b8b29619b53de6712ee081c69 not found: ID does not exist" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.394584 5122 scope.go:117] "RemoveContainer" containerID="ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.409220 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-dljph"] Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.415635 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-dljph"] Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.421046 5122 scope.go:117] "RemoveContainer" containerID="ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964" Feb 24 00:13:30 crc kubenswrapper[5122]: E0224 00:13:30.421522 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964\": container with ID starting with ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964 not found: ID does not exist" containerID="ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.421552 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964"} err="failed to get container status \"ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964\": rpc error: code = NotFound desc = could not find container 
\"ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964\": container with ID starting with ea5ea331a137a80e723b8f8a5c309278b8fd8521dc15f87536b648116cfa9964 not found: ID does not exist" Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.432467 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"] Feb 24 00:13:30 crc kubenswrapper[5122]: I0224 00:13:30.435544 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-fgmwh"] Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.379853 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" event={"ID":"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d","Type":"ContainerStarted","Data":"4a0b192dc2582a104cd4f683d9b88fe6e4b5a6097194f34355528490530e79e4"} Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.380686 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.384276 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" event={"ID":"39f8b06d-e3ea-4363-9d24-77f35473b5a7","Type":"ContainerStarted","Data":"17b4a334d55eef6980f1a662b943df339600c95ea5ffeb5f46bcee64a116bb7a"} Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.384480 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.386848 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 
00:13:31.404049 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" podStartSLOduration=4.404026977 podStartE2EDuration="4.404026977s" podCreationTimestamp="2026-02-24 00:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:13:31.401183565 +0000 UTC m=+278.490638078" watchObservedRunningTime="2026-02-24 00:13:31.404026977 +0000 UTC m=+278.493481500" Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.465994 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.495135 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" podStartSLOduration=4.495113973 podStartE2EDuration="4.495113973s" podCreationTimestamp="2026-02-24 00:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:13:31.475050896 +0000 UTC m=+278.564505419" watchObservedRunningTime="2026-02-24 00:13:31.495113973 +0000 UTC m=+278.584568506" Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.786596 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e099e78-6972-4572-8c55-fc2797b0809d" path="/var/lib/kubelet/pods/3e099e78-6972-4572-8c55-fc2797b0809d/volumes" Feb 24 00:13:31 crc kubenswrapper[5122]: I0224 00:13:31.787511 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab76d06a-8a15-4887-b5f0-d56a5fd0b684" path="/var/lib/kubelet/pods/ab76d06a-8a15-4887-b5f0-d56a5fd0b684/volumes" Feb 24 00:13:33 crc kubenswrapper[5122]: I0224 00:13:33.171132 5122 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 24 00:13:36 crc kubenswrapper[5122]: I0224 00:13:36.450209 5122 ???:1] "http: TLS handshake error from 192.168.126.11:60682: no serving certificate available for the kubelet" Feb 24 00:13:40 crc kubenswrapper[5122]: I0224 00:13:40.829666 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.098034 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"] Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.098857 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" podUID="add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" containerName="route-controller-manager" containerID="cri-o://4a0b192dc2582a104cd4f683d9b88fe6e4b5a6097194f34355528490530e79e4" gracePeriod=30 Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.478523 5122 generic.go:358] "Generic (PLEG): container finished" podID="add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" containerID="4a0b192dc2582a104cd4f683d9b88fe6e4b5a6097194f34355528490530e79e4" exitCode=0 Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.478642 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" event={"ID":"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d","Type":"ContainerDied","Data":"4a0b192dc2582a104cd4f683d9b88fe6e4b5a6097194f34355528490530e79e4"} Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.517544 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.545915 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz"] Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.546528 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" containerName="route-controller-manager" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.546550 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" containerName="route-controller-manager" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.546675 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" containerName="route-controller-manager" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.553029 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.558628 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz"] Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.577800 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-config\") pod \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.577867 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-client-ca\") pod \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.577926 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-tmp\") pod \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.577987 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-serving-cert\") pod \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\" (UID: \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.578031 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x72bg\" (UniqueName: \"kubernetes.io/projected/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-kube-api-access-x72bg\") pod \"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\" (UID: 
\"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d\") " Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.578616 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-tmp" (OuterVolumeSpecName: "tmp") pod "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" (UID: "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.581456 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-client-ca" (OuterVolumeSpecName: "client-ca") pod "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" (UID: "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.581644 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-config" (OuterVolumeSpecName: "config") pod "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" (UID: "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.587730 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-kube-api-access-x72bg" (OuterVolumeSpecName: "kube-api-access-x72bg") pod "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" (UID: "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d"). InnerVolumeSpecName "kube-api-access-x72bg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.587744 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" (UID: "add6a021-d7aa-4c6c-9a98-b5a5b4ea800d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679417 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f20071bd-f6cb-47a2-92f3-253162ae446d-tmp\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679473 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8x5\" (UniqueName: \"kubernetes.io/projected/f20071bd-f6cb-47a2-92f3-253162ae446d-kube-api-access-xs8x5\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679508 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f20071bd-f6cb-47a2-92f3-253162ae446d-serving-cert\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679525 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f20071bd-f6cb-47a2-92f3-253162ae446d-client-ca\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679655 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f20071bd-f6cb-47a2-92f3-253162ae446d-config\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679894 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679944 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x72bg\" (UniqueName: \"kubernetes.io/projected/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-kube-api-access-x72bg\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679966 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679983 5122 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.679999 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.780925 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f20071bd-f6cb-47a2-92f3-253162ae446d-serving-cert\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.781211 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f20071bd-f6cb-47a2-92f3-253162ae446d-client-ca\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.781334 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f20071bd-f6cb-47a2-92f3-253162ae446d-config\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.781423 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f20071bd-f6cb-47a2-92f3-253162ae446d-tmp\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.781453 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8x5\" (UniqueName: 
\"kubernetes.io/projected/f20071bd-f6cb-47a2-92f3-253162ae446d-kube-api-access-xs8x5\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.781768 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f20071bd-f6cb-47a2-92f3-253162ae446d-tmp\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.782202 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f20071bd-f6cb-47a2-92f3-253162ae446d-client-ca\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.782462 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f20071bd-f6cb-47a2-92f3-253162ae446d-config\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.785453 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f20071bd-f6cb-47a2-92f3-253162ae446d-serving-cert\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 
00:13:46.798931 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs8x5\" (UniqueName: \"kubernetes.io/projected/f20071bd-f6cb-47a2-92f3-253162ae446d-kube-api-access-xs8x5\") pod \"route-controller-manager-8569f5b7b7-4pwlz\" (UID: \"f20071bd-f6cb-47a2-92f3-253162ae446d\") " pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:46 crc kubenswrapper[5122]: I0224 00:13:46.882426 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:47 crc kubenswrapper[5122]: I0224 00:13:47.334679 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz"] Feb 24 00:13:47 crc kubenswrapper[5122]: I0224 00:13:47.486043 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" event={"ID":"f20071bd-f6cb-47a2-92f3-253162ae446d","Type":"ContainerStarted","Data":"92d4029da9bfbcf6c7996ee98b1e6ba92423157a26056d6bd9693355bfcebecf"} Feb 24 00:13:47 crc kubenswrapper[5122]: I0224 00:13:47.487876 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" event={"ID":"add6a021-d7aa-4c6c-9a98-b5a5b4ea800d","Type":"ContainerDied","Data":"588dd76163ad6cca2cd759ba578802c912a6c8edda2f5890727f8e16ac552ad3"} Feb 24 00:13:47 crc kubenswrapper[5122]: I0224 00:13:47.487927 5122 scope.go:117] "RemoveContainer" containerID="4a0b192dc2582a104cd4f683d9b88fe6e4b5a6097194f34355528490530e79e4" Feb 24 00:13:47 crc kubenswrapper[5122]: I0224 00:13:47.487964 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9" Feb 24 00:13:47 crc kubenswrapper[5122]: I0224 00:13:47.521406 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"] Feb 24 00:13:47 crc kubenswrapper[5122]: I0224 00:13:47.524664 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc5b9dd6d-t8fs9"] Feb 24 00:13:47 crc kubenswrapper[5122]: I0224 00:13:47.782496 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add6a021-d7aa-4c6c-9a98-b5a5b4ea800d" path="/var/lib/kubelet/pods/add6a021-d7aa-4c6c-9a98-b5a5b4ea800d/volumes" Feb 24 00:13:48 crc kubenswrapper[5122]: I0224 00:13:48.494489 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" event={"ID":"f20071bd-f6cb-47a2-92f3-253162ae446d","Type":"ContainerStarted","Data":"6fd45689dc5ef30df5bab6bd7cd2b55f56ad1030300cd394d1439dd3e7f8bb7b"} Feb 24 00:13:48 crc kubenswrapper[5122]: I0224 00:13:48.494795 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:48 crc kubenswrapper[5122]: I0224 00:13:48.501441 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" Feb 24 00:13:48 crc kubenswrapper[5122]: I0224 00:13:48.514206 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8569f5b7b7-4pwlz" podStartSLOduration=2.514184898 podStartE2EDuration="2.514184898s" podCreationTimestamp="2026-02-24 00:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-24 00:13:48.511243984 +0000 UTC m=+295.600698567" watchObservedRunningTime="2026-02-24 00:13:48.514184898 +0000 UTC m=+295.603639411" Feb 24 00:13:50 crc kubenswrapper[5122]: I0224 00:13:50.508312 5122 generic.go:358] "Generic (PLEG): container finished" podID="5247eba3-d3c0-4892-a371-f5d13f08c178" containerID="34f039fdca56d903945596a284aabc764a5d7329958f022f40df64db2f5aa266" exitCode=0 Feb 24 00:13:50 crc kubenswrapper[5122]: I0224 00:13:50.508807 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-qpcf6" event={"ID":"5247eba3-d3c0-4892-a371-f5d13f08c178","Type":"ContainerDied","Data":"34f039fdca56d903945596a284aabc764a5d7329958f022f40df64db2f5aa266"} Feb 24 00:13:51 crc kubenswrapper[5122]: I0224 00:13:51.851513 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-qpcf6" Feb 24 00:13:51 crc kubenswrapper[5122]: I0224 00:13:51.947483 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skcbn\" (UniqueName: \"kubernetes.io/projected/5247eba3-d3c0-4892-a371-f5d13f08c178-kube-api-access-skcbn\") pod \"5247eba3-d3c0-4892-a371-f5d13f08c178\" (UID: \"5247eba3-d3c0-4892-a371-f5d13f08c178\") " Feb 24 00:13:51 crc kubenswrapper[5122]: I0224 00:13:51.947578 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5247eba3-d3c0-4892-a371-f5d13f08c178-serviceca\") pod \"5247eba3-d3c0-4892-a371-f5d13f08c178\" (UID: \"5247eba3-d3c0-4892-a371-f5d13f08c178\") " Feb 24 00:13:51 crc kubenswrapper[5122]: I0224 00:13:51.948384 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5247eba3-d3c0-4892-a371-f5d13f08c178-serviceca" (OuterVolumeSpecName: "serviceca") pod "5247eba3-d3c0-4892-a371-f5d13f08c178" (UID: 
"5247eba3-d3c0-4892-a371-f5d13f08c178"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:51 crc kubenswrapper[5122]: I0224 00:13:51.954727 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5247eba3-d3c0-4892-a371-f5d13f08c178-kube-api-access-skcbn" (OuterVolumeSpecName: "kube-api-access-skcbn") pod "5247eba3-d3c0-4892-a371-f5d13f08c178" (UID: "5247eba3-d3c0-4892-a371-f5d13f08c178"). InnerVolumeSpecName "kube-api-access-skcbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:13:52 crc kubenswrapper[5122]: I0224 00:13:52.049460 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-skcbn\" (UniqueName: \"kubernetes.io/projected/5247eba3-d3c0-4892-a371-f5d13f08c178-kube-api-access-skcbn\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:52 crc kubenswrapper[5122]: I0224 00:13:52.049508 5122 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5247eba3-d3c0-4892-a371-f5d13f08c178-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:52 crc kubenswrapper[5122]: I0224 00:13:52.526500 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-qpcf6" event={"ID":"5247eba3-d3c0-4892-a371-f5d13f08c178","Type":"ContainerDied","Data":"c841b6b4eb1b2a2c1d15f94d1e1abb27d2e98163c32330548f59d72b7c8b8dc8"} Feb 24 00:13:52 crc kubenswrapper[5122]: I0224 00:13:52.526548 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-qpcf6" Feb 24 00:13:52 crc kubenswrapper[5122]: I0224 00:13:52.526565 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c841b6b4eb1b2a2c1d15f94d1e1abb27d2e98163c32330548f59d72b7c8b8dc8" Feb 24 00:13:53 crc kubenswrapper[5122]: I0224 00:13:53.948800 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:13:53 crc kubenswrapper[5122]: I0224 00:13:53.950520 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.198163 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" podUID="58f519ba-9b81-416e-8f29-0c84e8607ab1" containerName="oauth-openshift" containerID="cri-o://072a6fdcc62f17f651130910a6d42386ee53cf9d58f3b81f875e515860b25532" gracePeriod=15 Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.547815 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" event={"ID":"58f519ba-9b81-416e-8f29-0c84e8607ab1","Type":"ContainerDied","Data":"072a6fdcc62f17f651130910a6d42386ee53cf9d58f3b81f875e515860b25532"} Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.547872 5122 generic.go:358] "Generic (PLEG): container finished" podID="58f519ba-9b81-416e-8f29-0c84e8607ab1" containerID="072a6fdcc62f17f651130910a6d42386ee53cf9d58f3b81f875e515860b25532" exitCode=0 Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.706219 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.731086 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-54594675bf-tck7l"] Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.731790 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58f519ba-9b81-416e-8f29-0c84e8607ab1" containerName="oauth-openshift" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.731815 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f519ba-9b81-416e-8f29-0c84e8607ab1" containerName="oauth-openshift" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.731848 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5247eba3-d3c0-4892-a371-f5d13f08c178" containerName="image-pruner" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.731856 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="5247eba3-d3c0-4892-a371-f5d13f08c178" containerName="image-pruner" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.731967 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="58f519ba-9b81-416e-8f29-0c84e8607ab1" containerName="oauth-openshift" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.731987 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="5247eba3-d3c0-4892-a371-f5d13f08c178" containerName="image-pruner" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.739290 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.749654 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54594675bf-tck7l"] Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.780885 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-provider-selection\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.780935 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-trusted-ca-bundle\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.780974 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-login\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.780990 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-policies\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781031 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-ocp-branding-template\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781057 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-serving-cert\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781098 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-error\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781132 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-session\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781191 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-service-ca\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781246 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-router-certs\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781298 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-cliconfig\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781325 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-idp-0-file-data\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781352 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-dir\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781378 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk2np\" (UniqueName: \"kubernetes.io/projected/58f519ba-9b81-416e-8f29-0c84e8607ab1-kube-api-access-gk2np\") pod \"58f519ba-9b81-416e-8f29-0c84e8607ab1\" (UID: \"58f519ba-9b81-416e-8f29-0c84e8607ab1\") " Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.781632 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: 
"58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.782016 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.782423 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.782636 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.783215 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.786636 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.786836 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f519ba-9b81-416e-8f29-0c84e8607ab1-kube-api-access-gk2np" (OuterVolumeSpecName: "kube-api-access-gk2np") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "kube-api-access-gk2np". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.786842 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.786988 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.787555 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.787777 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.787930 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.788250 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.793421 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "58f519ba-9b81-416e-8f29-0c84e8607ab1" (UID: "58f519ba-9b81-416e-8f29-0c84e8607ab1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882507 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882570 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-session\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882606 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-audit-policies\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 
00:13:54.882648 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882674 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-error\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882698 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-login\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882761 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64535f09-1739-480e-984c-7f42b6310355-audit-dir\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882847 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882874 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882900 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882948 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-service-ca\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.882970 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6j6\" (UniqueName: \"kubernetes.io/projected/64535f09-1739-480e-984c-7f42b6310355-kube-api-access-tj6j6\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: 
\"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883017 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883039 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-router-certs\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883247 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883280 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883300 5122 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883320 5122 reconciler_common.go:299] "Volume detached 
for volume \"kube-api-access-gk2np\" (UniqueName: \"kubernetes.io/projected/58f519ba-9b81-416e-8f29-0c84e8607ab1-kube-api-access-gk2np\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883341 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883358 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883376 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883392 5122 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883410 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883426 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 
00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883443 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883461 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883478 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.883495 5122 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58f519ba-9b81-416e-8f29-0c84e8607ab1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.984936 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-session\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.984983 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-audit-policies\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " 
pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985000 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985018 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-error\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985034 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-login\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985073 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64535f09-1739-480e-984c-7f42b6310355-audit-dir\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985108 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985134 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985163 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985203 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-service-ca\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985226 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tj6j6\" (UniqueName: \"kubernetes.io/projected/64535f09-1739-480e-984c-7f42b6310355-kube-api-access-tj6j6\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " 
pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985262 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985279 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-router-certs\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.985313 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.986354 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64535f09-1739-480e-984c-7f42b6310355-audit-dir\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.987746 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.987781 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.987827 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-audit-policies\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.988325 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-service-ca\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.990830 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-session\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc 
kubenswrapper[5122]: I0224 00:13:54.992679 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-error\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.992676 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.992917 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.993799 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-user-template-login\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.994204 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.994295 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:54 crc kubenswrapper[5122]: I0224 00:13:54.994522 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/64535f09-1739-480e-984c-7f42b6310355-v4-0-config-system-router-certs\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.013998 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj6j6\" (UniqueName: \"kubernetes.io/projected/64535f09-1739-480e-984c-7f42b6310355-kube-api-access-tj6j6\") pod \"oauth-openshift-54594675bf-tck7l\" (UID: \"64535f09-1739-480e-984c-7f42b6310355\") " pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.060547 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.509901 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54594675bf-tck7l"] Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.513698 5122 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.555464 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" event={"ID":"64535f09-1739-480e-984c-7f42b6310355","Type":"ContainerStarted","Data":"40b93ce990fa80c1eb657dac8a345b517db4aded7d32f4bff2d110d9febe5966"} Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.557623 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" event={"ID":"58f519ba-9b81-416e-8f29-0c84e8607ab1","Type":"ContainerDied","Data":"4b4995a3c2be72c70d56a395d2d255110ef153b8a9097833488090db647d603f"} Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.557652 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-jnnfl" Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.557687 5122 scope.go:117] "RemoveContainer" containerID="072a6fdcc62f17f651130910a6d42386ee53cf9d58f3b81f875e515860b25532" Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.619490 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-jnnfl"] Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.625024 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-jnnfl"] Feb 24 00:13:55 crc kubenswrapper[5122]: I0224 00:13:55.781902 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f519ba-9b81-416e-8f29-0c84e8607ab1" path="/var/lib/kubelet/pods/58f519ba-9b81-416e-8f29-0c84e8607ab1/volumes" Feb 24 00:13:56 crc kubenswrapper[5122]: I0224 00:13:56.565956 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" event={"ID":"64535f09-1739-480e-984c-7f42b6310355","Type":"ContainerStarted","Data":"a4f24e3113ff3e2fea818fead991e71e18852375ab09802acab1f3143da6ee93"} Feb 24 00:13:56 crc kubenswrapper[5122]: I0224 00:13:56.567553 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:56 crc kubenswrapper[5122]: I0224 00:13:56.575971 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" Feb 24 00:13:56 crc kubenswrapper[5122]: I0224 00:13:56.595731 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-54594675bf-tck7l" podStartSLOduration=27.59570517 podStartE2EDuration="27.59570517s" podCreationTimestamp="2026-02-24 00:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:13:56.58856975 +0000 UTC m=+303.678024283" watchObservedRunningTime="2026-02-24 00:13:56.59570517 +0000 UTC m=+303.685159723" Feb 24 00:14:05 crc kubenswrapper[5122]: I0224 00:14:05.378231 5122 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 00:14:06 crc kubenswrapper[5122]: I0224 00:14:06.109029 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-768cb5858b-4kns6"] Feb 24 00:14:06 crc kubenswrapper[5122]: I0224 00:14:06.109346 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" podUID="39f8b06d-e3ea-4363-9d24-77f35473b5a7" containerName="controller-manager" containerID="cri-o://17b4a334d55eef6980f1a662b943df339600c95ea5ffeb5f46bcee64a116bb7a" gracePeriod=30 Feb 24 00:14:06 crc kubenswrapper[5122]: I0224 00:14:06.627586 5122 generic.go:358] "Generic (PLEG): container finished" podID="39f8b06d-e3ea-4363-9d24-77f35473b5a7" containerID="17b4a334d55eef6980f1a662b943df339600c95ea5ffeb5f46bcee64a116bb7a" exitCode=0 Feb 24 00:14:06 crc kubenswrapper[5122]: I0224 00:14:06.628385 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" event={"ID":"39f8b06d-e3ea-4363-9d24-77f35473b5a7","Type":"ContainerDied","Data":"17b4a334d55eef6980f1a662b943df339600c95ea5ffeb5f46bcee64a116bb7a"} Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.226238 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.268249 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-pwbg5"] Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.268967 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39f8b06d-e3ea-4363-9d24-77f35473b5a7" containerName="controller-manager" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.268985 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f8b06d-e3ea-4363-9d24-77f35473b5a7" containerName="controller-manager" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.269150 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="39f8b06d-e3ea-4363-9d24-77f35473b5a7" containerName="controller-manager" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.275429 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-pwbg5"] Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.275559 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.279387 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39f8b06d-e3ea-4363-9d24-77f35473b5a7-tmp\") pod \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.279445 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-proxy-ca-bundles\") pod \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.279469 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-client-ca\") pod \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.279550 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xgwj\" (UniqueName: \"kubernetes.io/projected/39f8b06d-e3ea-4363-9d24-77f35473b5a7-kube-api-access-5xgwj\") pod \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.279665 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8b06d-e3ea-4363-9d24-77f35473b5a7-serving-cert\") pod \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.279734 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-config\") pod \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\" (UID: \"39f8b06d-e3ea-4363-9d24-77f35473b5a7\") " Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.280191 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "39f8b06d-e3ea-4363-9d24-77f35473b5a7" (UID: "39f8b06d-e3ea-4363-9d24-77f35473b5a7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.280791 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "39f8b06d-e3ea-4363-9d24-77f35473b5a7" (UID: "39f8b06d-e3ea-4363-9d24-77f35473b5a7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.280780 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-config" (OuterVolumeSpecName: "config") pod "39f8b06d-e3ea-4363-9d24-77f35473b5a7" (UID: "39f8b06d-e3ea-4363-9d24-77f35473b5a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.279955 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f8b06d-e3ea-4363-9d24-77f35473b5a7-tmp" (OuterVolumeSpecName: "tmp") pod "39f8b06d-e3ea-4363-9d24-77f35473b5a7" (UID: "39f8b06d-e3ea-4363-9d24-77f35473b5a7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.285409 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f8b06d-e3ea-4363-9d24-77f35473b5a7-kube-api-access-5xgwj" (OuterVolumeSpecName: "kube-api-access-5xgwj") pod "39f8b06d-e3ea-4363-9d24-77f35473b5a7" (UID: "39f8b06d-e3ea-4363-9d24-77f35473b5a7"). InnerVolumeSpecName "kube-api-access-5xgwj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.288304 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f8b06d-e3ea-4363-9d24-77f35473b5a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39f8b06d-e3ea-4363-9d24-77f35473b5a7" (UID: "39f8b06d-e3ea-4363-9d24-77f35473b5a7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381102 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-client-ca\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381177 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj6cd\" (UniqueName: \"kubernetes.io/projected/f44bcc52-b32c-44cf-b865-892824f21a1f-kube-api-access-rj6cd\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381208 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-config\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381278 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f44bcc52-b32c-44cf-b865-892824f21a1f-tmp\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381435 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f44bcc52-b32c-44cf-b865-892824f21a1f-serving-cert\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381537 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-proxy-ca-bundles\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381619 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/39f8b06d-e3ea-4363-9d24-77f35473b5a7-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381630 5122 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381639 5122 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381651 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5xgwj\" (UniqueName: \"kubernetes.io/projected/39f8b06d-e3ea-4363-9d24-77f35473b5a7-kube-api-access-5xgwj\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381659 5122 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39f8b06d-e3ea-4363-9d24-77f35473b5a7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.381668 5122 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39f8b06d-e3ea-4363-9d24-77f35473b5a7-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.482973 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f44bcc52-b32c-44cf-b865-892824f21a1f-tmp\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.483055 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f44bcc52-b32c-44cf-b865-892824f21a1f-serving-cert\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc 
kubenswrapper[5122]: I0224 00:14:07.483123 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-proxy-ca-bundles\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.483276 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-client-ca\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.483374 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rj6cd\" (UniqueName: \"kubernetes.io/projected/f44bcc52-b32c-44cf-b865-892824f21a1f-kube-api-access-rj6cd\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.483407 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-config\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.483536 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f44bcc52-b32c-44cf-b865-892824f21a1f-tmp\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " 
pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.484192 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-proxy-ca-bundles\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.484229 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-client-ca\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.484716 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44bcc52-b32c-44cf-b865-892824f21a1f-config\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.486680 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f44bcc52-b32c-44cf-b865-892824f21a1f-serving-cert\") pod \"controller-manager-8b6998464-pwbg5\" (UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.498301 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj6cd\" (UniqueName: \"kubernetes.io/projected/f44bcc52-b32c-44cf-b865-892824f21a1f-kube-api-access-rj6cd\") pod \"controller-manager-8b6998464-pwbg5\" 
(UID: \"f44bcc52-b32c-44cf-b865-892824f21a1f\") " pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.612196 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.634784 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.634819 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-768cb5858b-4kns6" event={"ID":"39f8b06d-e3ea-4363-9d24-77f35473b5a7","Type":"ContainerDied","Data":"cca4a2647f1aeb8a451305d4a1b08c0f99a04889716e713491dd411e9be3dfca"} Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.635325 5122 scope.go:117] "RemoveContainer" containerID="17b4a334d55eef6980f1a662b943df339600c95ea5ffeb5f46bcee64a116bb7a" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.687146 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-768cb5858b-4kns6"] Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.691560 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-768cb5858b-4kns6"] Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.781283 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f8b06d-e3ea-4363-9d24-77f35473b5a7" path="/var/lib/kubelet/pods/39f8b06d-e3ea-4363-9d24-77f35473b5a7/volumes" Feb 24 00:14:07 crc kubenswrapper[5122]: I0224 00:14:07.807277 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b6998464-pwbg5"] Feb 24 00:14:08 crc kubenswrapper[5122]: I0224 00:14:08.644251 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" event={"ID":"f44bcc52-b32c-44cf-b865-892824f21a1f","Type":"ContainerStarted","Data":"b3d73486257c6ee26853c4c8ef4a4bb694e6b5bcc98748649be30d17f29ce44e"} Feb 24 00:14:08 crc kubenswrapper[5122]: I0224 00:14:08.644725 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:08 crc kubenswrapper[5122]: I0224 00:14:08.644752 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" event={"ID":"f44bcc52-b32c-44cf-b865-892824f21a1f","Type":"ContainerStarted","Data":"1e081d9cd77ebb0dd8be7ee33a20a9396f36f100385d1ccbb36af967b03a9169"} Feb 24 00:14:08 crc kubenswrapper[5122]: I0224 00:14:08.672976 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" podStartSLOduration=2.672951184 podStartE2EDuration="2.672951184s" podCreationTimestamp="2026-02-24 00:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:14:08.667218249 +0000 UTC m=+315.756672852" watchObservedRunningTime="2026-02-24 00:14:08.672951184 +0000 UTC m=+315.762405737" Feb 24 00:14:08 crc kubenswrapper[5122]: I0224 00:14:08.948067 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b6998464-pwbg5" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.753847 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5844"] Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.754727 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d5844" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" 
containerName="registry-server" containerID="cri-o://1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e" gracePeriod=30 Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.760638 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvn26"] Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.763053 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lvn26" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerName="registry-server" containerID="cri-o://256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162" gracePeriod=30 Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.771147 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"] Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.771466 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerName="marketplace-operator" containerID="cri-o://6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19" gracePeriod=30 Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.775945 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpk76"] Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.776474 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cpk76" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerName="registry-server" containerID="cri-o://aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2" gracePeriod=30 Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.790591 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4rhfb"] Feb 24 00:14:30 crc 
kubenswrapper[5122]: I0224 00:14:30.798704 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4rhfb" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="registry-server" containerID="cri-o://3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216" gracePeriod=30 Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.805295 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-j9nvj"] Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.817153 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-j9nvj"] Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.817316 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.893530 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/64eca3ba-8cf7-44c8-9c06-302240cb10d9-tmp\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.893583 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shj6g\" (UniqueName: \"kubernetes.io/projected/64eca3ba-8cf7-44c8-9c06-302240cb10d9-kube-api-access-shj6g\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.893615 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/64eca3ba-8cf7-44c8-9c06-302240cb10d9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.893654 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64eca3ba-8cf7-44c8-9c06-302240cb10d9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.995281 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64eca3ba-8cf7-44c8-9c06-302240cb10d9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.995467 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/64eca3ba-8cf7-44c8-9c06-302240cb10d9-tmp\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.995494 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-shj6g\" (UniqueName: \"kubernetes.io/projected/64eca3ba-8cf7-44c8-9c06-302240cb10d9-kube-api-access-shj6g\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc 
kubenswrapper[5122]: I0224 00:14:30.995555 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/64eca3ba-8cf7-44c8-9c06-302240cb10d9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.996417 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64eca3ba-8cf7-44c8-9c06-302240cb10d9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:30 crc kubenswrapper[5122]: I0224 00:14:30.997809 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/64eca3ba-8cf7-44c8-9c06-302240cb10d9-tmp\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.007392 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/64eca3ba-8cf7-44c8-9c06-302240cb10d9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: \"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.014750 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-shj6g\" (UniqueName: \"kubernetes.io/projected/64eca3ba-8cf7-44c8-9c06-302240cb10d9-kube-api-access-shj6g\") pod \"marketplace-operator-547dbd544d-j9nvj\" (UID: 
\"64eca3ba-8cf7-44c8-9c06-302240cb10d9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.156827 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.160693 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.197903 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-trusted-ca\") pod \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.197952 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-tmp\") pod \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.197994 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwfnj\" (UniqueName: \"kubernetes.io/projected/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-kube-api-access-kwfnj\") pod \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.198018 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-operator-metrics\") pod \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\" (UID: \"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1\") " Feb 24 00:14:31 crc 
kubenswrapper[5122]: I0224 00:14:31.198996 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" (UID: "1f5902ff-7a31-4f4d-bc37-fd77aa5714f1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.199272 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-tmp" (OuterVolumeSpecName: "tmp") pod "1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" (UID: "1f5902ff-7a31-4f4d-bc37-fd77aa5714f1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.201962 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-kube-api-access-kwfnj" (OuterVolumeSpecName: "kube-api-access-kwfnj") pod "1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" (UID: "1f5902ff-7a31-4f4d-bc37-fd77aa5714f1"). InnerVolumeSpecName "kube-api-access-kwfnj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.204873 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" (UID: "1f5902ff-7a31-4f4d-bc37-fd77aa5714f1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.289895 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvn26" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.298844 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-catalog-content\") pod \"b49afeaf-b456-453e-899d-8fccce0a72b9\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.298976 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtgkd\" (UniqueName: \"kubernetes.io/projected/b49afeaf-b456-453e-899d-8fccce0a72b9-kube-api-access-jtgkd\") pod \"b49afeaf-b456-453e-899d-8fccce0a72b9\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.299042 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-utilities\") pod \"b49afeaf-b456-453e-899d-8fccce0a72b9\" (UID: \"b49afeaf-b456-453e-899d-8fccce0a72b9\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.299317 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kwfnj\" (UniqueName: \"kubernetes.io/projected/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-kube-api-access-kwfnj\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.299331 5122 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.299343 5122 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-marketplace-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.299355 5122 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.300580 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-utilities" (OuterVolumeSpecName: "utilities") pod "b49afeaf-b456-453e-899d-8fccce0a72b9" (UID: "b49afeaf-b456-453e-899d-8fccce0a72b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.310267 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49afeaf-b456-453e-899d-8fccce0a72b9-kube-api-access-jtgkd" (OuterVolumeSpecName: "kube-api-access-jtgkd") pod "b49afeaf-b456-453e-899d-8fccce0a72b9" (UID: "b49afeaf-b456-453e-899d-8fccce0a72b9"). InnerVolumeSpecName "kube-api-access-jtgkd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.351679 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b49afeaf-b456-453e-899d-8fccce0a72b9" (UID: "b49afeaf-b456-453e-899d-8fccce0a72b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.401388 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.401448 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jtgkd\" (UniqueName: \"kubernetes.io/projected/b49afeaf-b456-453e-899d-8fccce0a72b9-kube-api-access-jtgkd\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.401464 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49afeaf-b456-453e-899d-8fccce0a72b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.428557 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4rhfb" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.436314 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5844" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.446140 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cpk76" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.502678 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-catalog-content\") pod \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.502748 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9pfc\" (UniqueName: \"kubernetes.io/projected/01c0c130-15b5-40ed-b1c9-2d4a979a5953-kube-api-access-d9pfc\") pod \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.502805 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-utilities\") pod \"78a838b3-595e-4b72-b482-93f22e3cd1a0\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.502843 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-utilities\") pod \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.502868 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcdwq\" (UniqueName: \"kubernetes.io/projected/78a838b3-595e-4b72-b482-93f22e3cd1a0-kube-api-access-wcdwq\") pod \"78a838b3-595e-4b72-b482-93f22e3cd1a0\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.502939 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-catalog-content\") pod \"78a838b3-595e-4b72-b482-93f22e3cd1a0\" (UID: \"78a838b3-595e-4b72-b482-93f22e3cd1a0\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.502987 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-utilities\") pod \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.503017 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86xmh\" (UniqueName: \"kubernetes.io/projected/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-kube-api-access-86xmh\") pod \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\" (UID: \"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.503094 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-catalog-content\") pod \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\" (UID: \"01c0c130-15b5-40ed-b1c9-2d4a979a5953\") " Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.503859 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-utilities" (OuterVolumeSpecName: "utilities") pod "78a838b3-595e-4b72-b482-93f22e3cd1a0" (UID: "78a838b3-595e-4b72-b482-93f22e3cd1a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.504166 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-utilities" (OuterVolumeSpecName: "utilities") pod "2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" (UID: "2ddb4692-b755-4e0e-8c84-3e3c0440c3e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.504497 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-utilities" (OuterVolumeSpecName: "utilities") pod "01c0c130-15b5-40ed-b1c9-2d4a979a5953" (UID: "01c0c130-15b5-40ed-b1c9-2d4a979a5953"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.508250 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c0c130-15b5-40ed-b1c9-2d4a979a5953-kube-api-access-d9pfc" (OuterVolumeSpecName: "kube-api-access-d9pfc") pod "01c0c130-15b5-40ed-b1c9-2d4a979a5953" (UID: "01c0c130-15b5-40ed-b1c9-2d4a979a5953"). InnerVolumeSpecName "kube-api-access-d9pfc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.508273 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a838b3-595e-4b72-b482-93f22e3cd1a0-kube-api-access-wcdwq" (OuterVolumeSpecName: "kube-api-access-wcdwq") pod "78a838b3-595e-4b72-b482-93f22e3cd1a0" (UID: "78a838b3-595e-4b72-b482-93f22e3cd1a0"). InnerVolumeSpecName "kube-api-access-wcdwq". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.508256 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-kube-api-access-86xmh" (OuterVolumeSpecName: "kube-api-access-86xmh") pod "2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" (UID: "2ddb4692-b755-4e0e-8c84-3e3c0440c3e8"). InnerVolumeSpecName "kube-api-access-86xmh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.522193 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78a838b3-595e-4b72-b482-93f22e3cd1a0" (UID: "78a838b3-595e-4b72-b482-93f22e3cd1a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.540454 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01c0c130-15b5-40ed-b1c9-2d4a979a5953" (UID: "01c0c130-15b5-40ed-b1c9-2d4a979a5953"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.594491 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-j9nvj"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.604030 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9pfc\" (UniqueName: \"kubernetes.io/projected/01c0c130-15b5-40ed-b1c9-2d4a979a5953-kube-api-access-d9pfc\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.604055 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.604065 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.604088 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wcdwq\" (UniqueName: \"kubernetes.io/projected/78a838b3-595e-4b72-b482-93f22e3cd1a0-kube-api-access-wcdwq\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.604099 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a838b3-595e-4b72-b482-93f22e3cd1a0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.604112 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.604122 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-86xmh\" (UniqueName: \"kubernetes.io/projected/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-kube-api-access-86xmh\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.604132 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c0c130-15b5-40ed-b1c9-2d4a979a5953-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.609559 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" (UID: "2ddb4692-b755-4e0e-8c84-3e3c0440c3e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.705512 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.811620 5122 generic.go:358] "Generic (PLEG): container finished" podID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerID="256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162" exitCode=0
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.811681 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn26" event={"ID":"b49afeaf-b456-453e-899d-8fccce0a72b9","Type":"ContainerDied","Data":"256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.811705 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn26" event={"ID":"b49afeaf-b456-453e-899d-8fccce0a72b9","Type":"ContainerDied","Data":"7cc08e080ecdeda146d960c0a0ea4f3a29f78db047bf524fbea1db3c808401ec"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.811721 5122 scope.go:117] "RemoveContainer" containerID="256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.811886 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvn26"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.817193 5122 generic.go:358] "Generic (PLEG): container finished" podID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerID="6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19" exitCode=0
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.817299 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" event={"ID":"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1","Type":"ContainerDied","Data":"6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.817326 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l" event={"ID":"1f5902ff-7a31-4f4d-bc37-fd77aa5714f1","Type":"ContainerDied","Data":"88566cd16bbc2bb5c4ea700adc8d078ccbc82f5da0bde2d8891392ef3aaa0324"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.817392 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.821811 5122 generic.go:358] "Generic (PLEG): container finished" podID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerID="3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216" exitCode=0
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.821976 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhfb" event={"ID":"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8","Type":"ContainerDied","Data":"3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.822057 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhfb" event={"ID":"2ddb4692-b755-4e0e-8c84-3e3c0440c3e8","Type":"ContainerDied","Data":"2b283d05f4c57ff60a600de34928d5039f7718e6d69af5aa666ca840201eeaeb"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.822009 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4rhfb"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.825884 5122 generic.go:358] "Generic (PLEG): container finished" podID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerID="1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e" exitCode=0
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.826022 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5844"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.826020 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5844" event={"ID":"01c0c130-15b5-40ed-b1c9-2d4a979a5953","Type":"ContainerDied","Data":"1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.826253 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5844" event={"ID":"01c0c130-15b5-40ed-b1c9-2d4a979a5953","Type":"ContainerDied","Data":"6f32d0cf4a7eb7ff8deb7cef5a1a0fd0bc2edde911438e9d51f9c6d08585d6e8"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.827475 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" event={"ID":"64eca3ba-8cf7-44c8-9c06-302240cb10d9","Type":"ContainerStarted","Data":"aa8ae09f2aa0c0a446be9d61953ac58df95765631a4a158e012fb33645b31af7"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.831062 5122 generic.go:358] "Generic (PLEG): container finished" podID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerID="aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2" exitCode=0
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.831112 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpk76" event={"ID":"78a838b3-595e-4b72-b482-93f22e3cd1a0","Type":"ContainerDied","Data":"aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.831129 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpk76" event={"ID":"78a838b3-595e-4b72-b482-93f22e3cd1a0","Type":"ContainerDied","Data":"2694aaddad634dd5dd2c013a7d297be14574fe9f69fcc8822c09ea0e5fcf19a0"}
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.831193 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cpk76"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.844461 5122 scope.go:117] "RemoveContainer" containerID="3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.876094 5122 scope.go:117] "RemoveContainer" containerID="645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.885323 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4rhfb"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.893295 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4rhfb"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.896573 5122 scope.go:117] "RemoveContainer" containerID="256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.896730 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvn26"]
Feb 24 00:14:31 crc kubenswrapper[5122]: E0224 00:14:31.896957 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162\": container with ID starting with 256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162 not found: ID does not exist" containerID="256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.897009 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162"} err="failed to get container status \"256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162\": rpc error: code = NotFound desc = could not find container \"256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162\": container with ID starting with 256e9fbde101312e373c27336bb2c4ff3dda9f39f13a5f63ebc9a96d52c8d162 not found: ID does not exist"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.897035 5122 scope.go:117] "RemoveContainer" containerID="3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef"
Feb 24 00:14:31 crc kubenswrapper[5122]: E0224 00:14:31.897917 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef\": container with ID starting with 3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef not found: ID does not exist" containerID="3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.897954 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef"} err="failed to get container status \"3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef\": rpc error: code = NotFound desc = could not find container \"3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef\": container with ID starting with 3196800fa0bd36295bc64ccb5ea3cf46b9c49149433c9a247de1b8a6258b8cef not found: ID does not exist"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.897988 5122 scope.go:117] "RemoveContainer" containerID="645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e"
Feb 24 00:14:31 crc kubenswrapper[5122]: E0224 00:14:31.898870 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e\": container with ID starting with 645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e not found: ID does not exist" containerID="645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.898898 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e"} err="failed to get container status \"645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e\": rpc error: code = NotFound desc = could not find container \"645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e\": container with ID starting with 645dfc88a9faa9bc2a63a984a315b93c27ed21b12b22e65675258da523af5c4e not found: ID does not exist"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.898915 5122 scope.go:117] "RemoveContainer" containerID="6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.899609 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lvn26"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.912099 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5844"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.920996 5122 scope.go:117] "RemoveContainer" containerID="a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.922174 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d5844"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.933911 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpk76"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.938134 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpk76"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.941480 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.946790 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5xl2l"]
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.958303 5122 scope.go:117] "RemoveContainer" containerID="6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19"
Feb 24 00:14:31 crc kubenswrapper[5122]: E0224 00:14:31.958900 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19\": container with ID starting with 6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19 not found: ID does not exist" containerID="6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.958943 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19"} err="failed to get container status \"6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19\": rpc error: code = NotFound desc = could not find container \"6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19\": container with ID starting with 6eca1d726aa22b4afea1591ac7b5688041f0fb2e89aa665f14dbee8cb15d1c19 not found: ID does not exist"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.958971 5122 scope.go:117] "RemoveContainer" containerID="a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e"
Feb 24 00:14:31 crc kubenswrapper[5122]: E0224 00:14:31.959395 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e\": container with ID starting with a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e not found: ID does not exist" containerID="a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.959445 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e"} err="failed to get container status \"a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e\": rpc error: code = NotFound desc = could not find container \"a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e\": container with ID starting with a7ee4b1baa3882cda607155ce81f8ab91cda5f20b6fa931bacf6c511cb42962e not found: ID does not exist"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.959477 5122 scope.go:117] "RemoveContainer" containerID="3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.971095 5122 scope.go:117] "RemoveContainer" containerID="a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e"
Feb 24 00:14:31 crc kubenswrapper[5122]: I0224 00:14:31.985913 5122 scope.go:117] "RemoveContainer" containerID="aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.000050 5122 scope.go:117] "RemoveContainer" containerID="3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.000448 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216\": container with ID starting with 3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216 not found: ID does not exist" containerID="3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.000554 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216"} err="failed to get container status \"3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216\": rpc error: code = NotFound desc = could not find container \"3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216\": container with ID starting with 3736722343a497cbba07ac1db3416712523114d7c3efc3a3ea7650877738d216 not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.000657 5122 scope.go:117] "RemoveContainer" containerID="a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.001052 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e\": container with ID starting with a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e not found: ID does not exist" containerID="a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.001153 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e"} err="failed to get container status \"a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e\": rpc error: code = NotFound desc = could not find container \"a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e\": container with ID starting with a61884c333d3834b6e96427625d35e357e68d40fcaad7290b725ebceeabc8d7e not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.001236 5122 scope.go:117] "RemoveContainer" containerID="aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.001506 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425\": container with ID starting with aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425 not found: ID does not exist" containerID="aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.001605 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425"} err="failed to get container status \"aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425\": rpc error: code = NotFound desc = could not find container \"aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425\": container with ID starting with aa0a24a12a17ff03d2b2b7b2b102e370bb45516815a379b930e4682c6c736425 not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.001689 5122 scope.go:117] "RemoveContainer" containerID="1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.018108 5122 scope.go:117] "RemoveContainer" containerID="61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.031627 5122 scope.go:117] "RemoveContainer" containerID="757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.047890 5122 scope.go:117] "RemoveContainer" containerID="1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.048563 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e\": container with ID starting with 1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e not found: ID does not exist" containerID="1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.048605 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e"} err="failed to get container status \"1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e\": rpc error: code = NotFound desc = could not find container \"1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e\": container with ID starting with 1dae4300713647e6a426ddbd74d8585b0d26cca313d3b4b3cdeeda2264e6c27e not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.048636 5122 scope.go:117] "RemoveContainer" containerID="61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.049805 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405\": container with ID starting with 61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405 not found: ID does not exist" containerID="61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.049839 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405"} err="failed to get container status \"61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405\": rpc error: code = NotFound desc = could not find container \"61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405\": container with ID starting with 61bba79cfa1f8b86c762e95e8fb4a142659ad877f35c162871499d1e01088405 not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.049859 5122 scope.go:117] "RemoveContainer" containerID="757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.050221 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca\": container with ID starting with 757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca not found: ID does not exist" containerID="757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.050247 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca"} err="failed to get container status \"757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca\": rpc error: code = NotFound desc = could not find container \"757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca\": container with ID starting with 757d04d1df68a61c813111874f48ebdeb1447f8cc66c76c1aacd860c8dcf38ca not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.050263 5122 scope.go:117] "RemoveContainer" containerID="aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.063854 5122 scope.go:117] "RemoveContainer" containerID="4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.077272 5122 scope.go:117] "RemoveContainer" containerID="fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.093185 5122 scope.go:117] "RemoveContainer" containerID="aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.093649 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2\": container with ID starting with aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2 not found: ID does not exist" containerID="aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.093710 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2"} err="failed to get container status \"aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2\": rpc error: code = NotFound desc = could not find container \"aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2\": container with ID starting with aa91f74adec92e4466ce75bc25de3491eabeb87932b2fab5740786eb79fe4dd2 not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.093752 5122 scope.go:117] "RemoveContainer" containerID="4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.094181 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4\": container with ID starting with 4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4 not found: ID does not exist" containerID="4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.094230 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4"} err="failed to get container status \"4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4\": rpc error: code = NotFound desc = could not find container \"4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4\": container with ID starting with 4a52831f8e7ea01e509f3166561e6ebb0d7a70d524f2e4eb5c949bf123766db4 not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.094263 5122 scope.go:117] "RemoveContainer" containerID="fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da"
Feb 24 00:14:32 crc kubenswrapper[5122]: E0224 00:14:32.094554 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da\": container with ID starting with fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da not found: ID does not exist" containerID="fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.094587 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da"} err="failed to get container status \"fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da\": rpc error: code = NotFound desc = could not find container \"fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da\": container with ID starting with fb1f85ca33e288d4d38db2f65aa8d9ec9b8339d57d6dd7ef902631871daa29da not found: ID does not exist"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.842348 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" event={"ID":"64eca3ba-8cf7-44c8-9c06-302240cb10d9","Type":"ContainerStarted","Data":"81b451f4ad9180cd71b92128f653da83d0cf86c3acbb3b2e834f8a463b35bf49"}
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.842733 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.847599 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.863645 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-j9nvj" podStartSLOduration=2.863631837 podStartE2EDuration="2.863631837s" podCreationTimestamp="2026-02-24 00:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:14:32.862691881 +0000 UTC m=+339.952146414" watchObservedRunningTime="2026-02-24 00:14:32.863631837 +0000 UTC m=+339.953086350"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.965693 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4j57"]
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966321 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerName="extract-content"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966342 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerName="extract-content"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966356 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerName="extract-content"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966363 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerName="extract-content"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966375 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966384 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966394 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966402 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966411 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966417 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966430 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerName="extract-utilities"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966437 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerName="extract-utilities"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966446 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerName="marketplace-operator"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966452 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerName="marketplace-operator"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966459 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="extract-content"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966464 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="extract-content"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966473 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerName="extract-utilities"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966478 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerName="extract-utilities"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966488 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966493 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966503 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerName="extract-content"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966510 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerName="extract-content"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966520 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerName="extract-utilities"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966527 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerName="extract-utilities"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966549 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="extract-utilities"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966556 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="extract-utilities"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966643 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966656 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966693 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966703 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" containerName="registry-server"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966711 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerName="marketplace-operator"
Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966718 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1"
containerName="marketplace-operator" Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966802 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerName="marketplace-operator" Feb 24 00:14:32 crc kubenswrapper[5122]: I0224 00:14:32.966809 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" containerName="marketplace-operator" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.084893 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4j57"] Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.084976 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.087939 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.122544 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-utilities\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.122598 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-catalog-content\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.122648 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-7bqb8\" (UniqueName: \"kubernetes.io/projected/a97f6def-aff6-4a05-862a-959aa7b87606-kube-api-access-7bqb8\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.160949 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zxkvk"] Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.204519 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxkvk"] Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.204668 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.207028 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.223453 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b27a8c-4814-4cea-b395-be2e22807da6-catalog-content\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.223503 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7bqb8\" (UniqueName: \"kubernetes.io/projected/a97f6def-aff6-4a05-862a-959aa7b87606-kube-api-access-7bqb8\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.223595 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-utilities\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.223636 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrzq\" (UniqueName: \"kubernetes.io/projected/83b27a8c-4814-4cea-b395-be2e22807da6-kube-api-access-fhrzq\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.223672 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-catalog-content\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.223727 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b27a8c-4814-4cea-b395-be2e22807da6-utilities\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.224187 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-utilities\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.224235 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-catalog-content\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.242459 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bqb8\" (UniqueName: \"kubernetes.io/projected/a97f6def-aff6-4a05-862a-959aa7b87606-kube-api-access-7bqb8\") pod \"redhat-marketplace-m4j57\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") " pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.324787 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b27a8c-4814-4cea-b395-be2e22807da6-utilities\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.324843 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b27a8c-4814-4cea-b395-be2e22807da6-catalog-content\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.324913 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fhrzq\" (UniqueName: \"kubernetes.io/projected/83b27a8c-4814-4cea-b395-be2e22807da6-kube-api-access-fhrzq\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.325672 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/83b27a8c-4814-4cea-b395-be2e22807da6-utilities\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.325964 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b27a8c-4814-4cea-b395-be2e22807da6-catalog-content\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.341697 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhrzq\" (UniqueName: \"kubernetes.io/projected/83b27a8c-4814-4cea-b395-be2e22807da6-kube-api-access-fhrzq\") pod \"certified-operators-zxkvk\" (UID: \"83b27a8c-4814-4cea-b395-be2e22807da6\") " pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.406699 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.523290 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.787063 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c0c130-15b5-40ed-b1c9-2d4a979a5953" path="/var/lib/kubelet/pods/01c0c130-15b5-40ed-b1c9-2d4a979a5953/volumes" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.788507 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5902ff-7a31-4f4d-bc37-fd77aa5714f1" path="/var/lib/kubelet/pods/1f5902ff-7a31-4f4d-bc37-fd77aa5714f1/volumes" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.789875 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ddb4692-b755-4e0e-8c84-3e3c0440c3e8" path="/var/lib/kubelet/pods/2ddb4692-b755-4e0e-8c84-3e3c0440c3e8/volumes" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.794475 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78a838b3-595e-4b72-b482-93f22e3cd1a0" path="/var/lib/kubelet/pods/78a838b3-595e-4b72-b482-93f22e3cd1a0/volumes" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.795316 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49afeaf-b456-453e-899d-8fccce0a72b9" path="/var/lib/kubelet/pods/b49afeaf-b456-453e-899d-8fccce0a72b9/volumes" Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.798793 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4j57"] Feb 24 00:14:33 crc kubenswrapper[5122]: W0224 00:14:33.803127 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97f6def_aff6_4a05_862a_959aa7b87606.slice/crio-308f66a4108a62ef53f551537dcc2204654915c7194f475ab5f111fa5d6ded80 WatchSource:0}: Error finding container 308f66a4108a62ef53f551537dcc2204654915c7194f475ab5f111fa5d6ded80: Status 404 returned error can't find the container with id 
308f66a4108a62ef53f551537dcc2204654915c7194f475ab5f111fa5d6ded80 Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.857935 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4j57" event={"ID":"a97f6def-aff6-4a05-862a-959aa7b87606","Type":"ContainerStarted","Data":"308f66a4108a62ef53f551537dcc2204654915c7194f475ab5f111fa5d6ded80"} Feb 24 00:14:33 crc kubenswrapper[5122]: I0224 00:14:33.946403 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxkvk"] Feb 24 00:14:34 crc kubenswrapper[5122]: I0224 00:14:34.867597 5122 generic.go:358] "Generic (PLEG): container finished" podID="83b27a8c-4814-4cea-b395-be2e22807da6" containerID="5d27d1da59822f76fc649a36672d64c36cd71f0599a09cd44f9d203c1273a312" exitCode=0 Feb 24 00:14:34 crc kubenswrapper[5122]: I0224 00:14:34.868014 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxkvk" event={"ID":"83b27a8c-4814-4cea-b395-be2e22807da6","Type":"ContainerDied","Data":"5d27d1da59822f76fc649a36672d64c36cd71f0599a09cd44f9d203c1273a312"} Feb 24 00:14:34 crc kubenswrapper[5122]: I0224 00:14:34.868052 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxkvk" event={"ID":"83b27a8c-4814-4cea-b395-be2e22807da6","Type":"ContainerStarted","Data":"634b679c5cf2fc9c4e50f12f29c1270dcd94e76a5b1db3f919715a65ef6a630b"} Feb 24 00:14:34 crc kubenswrapper[5122]: I0224 00:14:34.870106 5122 generic.go:358] "Generic (PLEG): container finished" podID="a97f6def-aff6-4a05-862a-959aa7b87606" containerID="13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4" exitCode=0 Feb 24 00:14:34 crc kubenswrapper[5122]: I0224 00:14:34.870190 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4j57" 
event={"ID":"a97f6def-aff6-4a05-862a-959aa7b87606","Type":"ContainerDied","Data":"13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4"} Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.101043 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cpcq8"] Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.183909 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cpcq8"] Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.184114 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.252548 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b2351ea-5139-488e-8455-284e3049f511-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.252659 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2jmb\" (UniqueName: \"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-kube-api-access-z2jmb\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.252703 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.252771 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.252796 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b2351ea-5139-488e-8455-284e3049f511-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.252861 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b2351ea-5139-488e-8455-284e3049f511-trusted-ca\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.252909 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b2351ea-5139-488e-8455-284e3049f511-registry-certificates\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.252954 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-registry-tls\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.280971 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.357775 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.357832 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b2351ea-5139-488e-8455-284e3049f511-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.357869 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b2351ea-5139-488e-8455-284e3049f511-trusted-ca\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.357905 5122 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b2351ea-5139-488e-8455-284e3049f511-registry-certificates\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.357946 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-registry-tls\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.358004 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b2351ea-5139-488e-8455-284e3049f511-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.358032 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2jmb\" (UniqueName: \"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-kube-api-access-z2jmb\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.359500 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2b2351ea-5139-488e-8455-284e3049f511-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.359627 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b2351ea-5139-488e-8455-284e3049f511-trusted-ca\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.360654 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2b2351ea-5139-488e-8455-284e3049f511-registry-certificates\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.368653 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4t9rq"] Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.371990 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-registry-tls\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.377864 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2b2351ea-5139-488e-8455-284e3049f511-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.386524 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.394255 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2jmb\" (UniqueName: \"kubernetes.io/projected/2b2351ea-5139-488e-8455-284e3049f511-kube-api-access-z2jmb\") pod \"image-registry-5d9d95bf5b-cpcq8\" (UID: \"2b2351ea-5139-488e-8455-284e3049f511\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.424966 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4t9rq"] Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.425143 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.426851 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.459271 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127102d8-1da1-4582-9512-75958969764b-catalog-content\") pod \"community-operators-4t9rq\" (UID: \"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.459354 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127102d8-1da1-4582-9512-75958969764b-utilities\") pod \"community-operators-4t9rq\" (UID: 
\"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.459463 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdssh\" (UniqueName: \"kubernetes.io/projected/127102d8-1da1-4582-9512-75958969764b-kube-api-access-kdssh\") pod \"community-operators-4t9rq\" (UID: \"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.504173 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.561432 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kdssh\" (UniqueName: \"kubernetes.io/projected/127102d8-1da1-4582-9512-75958969764b-kube-api-access-kdssh\") pod \"community-operators-4t9rq\" (UID: \"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.561547 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127102d8-1da1-4582-9512-75958969764b-catalog-content\") pod \"community-operators-4t9rq\" (UID: \"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.561591 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127102d8-1da1-4582-9512-75958969764b-utilities\") pod \"community-operators-4t9rq\" (UID: \"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.562064 5122 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127102d8-1da1-4582-9512-75958969764b-utilities\") pod \"community-operators-4t9rq\" (UID: \"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.562627 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127102d8-1da1-4582-9512-75958969764b-catalog-content\") pod \"community-operators-4t9rq\" (UID: \"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.568570 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8cprn"] Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.579518 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdssh\" (UniqueName: \"kubernetes.io/projected/127102d8-1da1-4582-9512-75958969764b-kube-api-access-kdssh\") pod \"community-operators-4t9rq\" (UID: \"127102d8-1da1-4582-9512-75958969764b\") " pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.712946 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8cprn"] Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.713169 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.715254 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.743300 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.763730 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4gt\" (UniqueName: \"kubernetes.io/projected/120593d4-24fb-4884-9aa1-ba609c88f3c5-kube-api-access-jp4gt\") pod \"redhat-operators-8cprn\" (UID: \"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.763780 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120593d4-24fb-4884-9aa1-ba609c88f3c5-utilities\") pod \"redhat-operators-8cprn\" (UID: \"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.763885 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120593d4-24fb-4884-9aa1-ba609c88f3c5-catalog-content\") pod \"redhat-operators-8cprn\" (UID: \"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.865282 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120593d4-24fb-4884-9aa1-ba609c88f3c5-catalog-content\") pod \"redhat-operators-8cprn\" (UID: \"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.865742 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120593d4-24fb-4884-9aa1-ba609c88f3c5-catalog-content\") pod \"redhat-operators-8cprn\" (UID: 
\"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.865807 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4gt\" (UniqueName: \"kubernetes.io/projected/120593d4-24fb-4884-9aa1-ba609c88f3c5-kube-api-access-jp4gt\") pod \"redhat-operators-8cprn\" (UID: \"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.865916 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120593d4-24fb-4884-9aa1-ba609c88f3c5-utilities\") pod \"redhat-operators-8cprn\" (UID: \"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.866465 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120593d4-24fb-4884-9aa1-ba609c88f3c5-utilities\") pod \"redhat-operators-8cprn\" (UID: \"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.886925 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp4gt\" (UniqueName: \"kubernetes.io/projected/120593d4-24fb-4884-9aa1-ba609c88f3c5-kube-api-access-jp4gt\") pod \"redhat-operators-8cprn\" (UID: \"120593d4-24fb-4884-9aa1-ba609c88f3c5\") " pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:35 crc kubenswrapper[5122]: I0224 00:14:35.915597 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cpcq8"] Feb 24 00:14:35 crc kubenswrapper[5122]: W0224 00:14:35.932214 5122 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2351ea_5139_488e_8455_284e3049f511.slice/crio-5fa60ec3904edafac7daa912320d1c731e9f008412de99693002b22ac514c500 WatchSource:0}: Error finding container 5fa60ec3904edafac7daa912320d1c731e9f008412de99693002b22ac514c500: Status 404 returned error can't find the container with id 5fa60ec3904edafac7daa912320d1c731e9f008412de99693002b22ac514c500 Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.030083 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.174621 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4t9rq"] Feb 24 00:14:36 crc kubenswrapper[5122]: W0224 00:14:36.188330 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod127102d8_1da1_4582_9512_75958969764b.slice/crio-a764783e7af04a66b5537c2c6a60dab273976a14b5c4df5d3038dbc1559261f4 WatchSource:0}: Error finding container a764783e7af04a66b5537c2c6a60dab273976a14b5c4df5d3038dbc1559261f4: Status 404 returned error can't find the container with id a764783e7af04a66b5537c2c6a60dab273976a14b5c4df5d3038dbc1559261f4 Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.883138 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t9rq" event={"ID":"127102d8-1da1-4582-9512-75958969764b","Type":"ContainerStarted","Data":"a764783e7af04a66b5537c2c6a60dab273976a14b5c4df5d3038dbc1559261f4"} Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.885624 5122 generic.go:358] "Generic (PLEG): container finished" podID="a97f6def-aff6-4a05-862a-959aa7b87606" containerID="586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7" exitCode=0 Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.885730 5122 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-m4j57" event={"ID":"a97f6def-aff6-4a05-862a-959aa7b87606","Type":"ContainerDied","Data":"586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7"} Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.888472 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" event={"ID":"2b2351ea-5139-488e-8455-284e3049f511","Type":"ContainerStarted","Data":"dab0dad246fa757f09f091a1fbfe8b21d75ce8a83d9de9fcfb6f6a6af7c5d503"} Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.888506 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" event={"ID":"2b2351ea-5139-488e-8455-284e3049f511","Type":"ContainerStarted","Data":"5fa60ec3904edafac7daa912320d1c731e9f008412de99693002b22ac514c500"} Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.927582 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:36 crc kubenswrapper[5122]: I0224 00:14:36.963623 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" podStartSLOduration=1.9636004630000001 podStartE2EDuration="1.963600463s" podCreationTimestamp="2026-02-24 00:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:14:36.949854095 +0000 UTC m=+344.039308688" watchObservedRunningTime="2026-02-24 00:14:36.963600463 +0000 UTC m=+344.053054976" Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.362191 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8cprn"] Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.894812 5122 generic.go:358] "Generic (PLEG): container finished" 
podID="83b27a8c-4814-4cea-b395-be2e22807da6" containerID="205a1c4c3f844fae3fef438b67ef3ea75ebf7477f80c3e239778dd5ac7987efa" exitCode=0 Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.894926 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxkvk" event={"ID":"83b27a8c-4814-4cea-b395-be2e22807da6","Type":"ContainerDied","Data":"205a1c4c3f844fae3fef438b67ef3ea75ebf7477f80c3e239778dd5ac7987efa"} Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.897954 5122 generic.go:358] "Generic (PLEG): container finished" podID="127102d8-1da1-4582-9512-75958969764b" containerID="ffd6f1f57a2b9d573b192973ad1ecee0fb6ab1456ed6158e0867e8099762cde9" exitCode=0 Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.898047 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t9rq" event={"ID":"127102d8-1da1-4582-9512-75958969764b","Type":"ContainerDied","Data":"ffd6f1f57a2b9d573b192973ad1ecee0fb6ab1456ed6158e0867e8099762cde9"} Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.899309 5122 generic.go:358] "Generic (PLEG): container finished" podID="120593d4-24fb-4884-9aa1-ba609c88f3c5" containerID="aed1fe5a1ad67ba10976958240871b5a1730df26673180c5b6640fddfcad16c7" exitCode=0 Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.899432 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cprn" event={"ID":"120593d4-24fb-4884-9aa1-ba609c88f3c5","Type":"ContainerDied","Data":"aed1fe5a1ad67ba10976958240871b5a1730df26673180c5b6640fddfcad16c7"} Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.899486 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cprn" event={"ID":"120593d4-24fb-4884-9aa1-ba609c88f3c5","Type":"ContainerStarted","Data":"b9de0a767767cfa627a0519207b575abb70e66aac8cea67ab7b84da81868b885"} Feb 24 00:14:37 crc kubenswrapper[5122]: I0224 00:14:37.903642 5122 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4j57" event={"ID":"a97f6def-aff6-4a05-862a-959aa7b87606","Type":"ContainerStarted","Data":"014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e"} Feb 24 00:14:38 crc kubenswrapper[5122]: I0224 00:14:38.125606 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4j57" podStartSLOduration=5.086503348 podStartE2EDuration="6.125590369s" podCreationTimestamp="2026-02-24 00:14:32 +0000 UTC" firstStartedPulling="2026-02-24 00:14:34.870793908 +0000 UTC m=+341.960248411" lastFinishedPulling="2026-02-24 00:14:35.909880919 +0000 UTC m=+342.999335432" observedRunningTime="2026-02-24 00:14:38.11945251 +0000 UTC m=+345.208907053" watchObservedRunningTime="2026-02-24 00:14:38.125590369 +0000 UTC m=+345.215044882" Feb 24 00:14:38 crc kubenswrapper[5122]: I0224 00:14:38.910313 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxkvk" event={"ID":"83b27a8c-4814-4cea-b395-be2e22807da6","Type":"ContainerStarted","Data":"f900da2a62fe40eb58bdacad0af36a18443bf4dc117378d0c6a85200591cc78d"} Feb 24 00:14:38 crc kubenswrapper[5122]: I0224 00:14:38.931302 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zxkvk" podStartSLOduration=3.853904328 podStartE2EDuration="5.931274949s" podCreationTimestamp="2026-02-24 00:14:33 +0000 UTC" firstStartedPulling="2026-02-24 00:14:34.869864642 +0000 UTC m=+341.959319155" lastFinishedPulling="2026-02-24 00:14:36.947235263 +0000 UTC m=+344.036689776" observedRunningTime="2026-02-24 00:14:38.926248721 +0000 UTC m=+346.015703244" watchObservedRunningTime="2026-02-24 00:14:38.931274949 +0000 UTC m=+346.020729502" Feb 24 00:14:39 crc kubenswrapper[5122]: I0224 00:14:39.917206 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8cprn" event={"ID":"120593d4-24fb-4884-9aa1-ba609c88f3c5","Type":"ContainerStarted","Data":"2a0153697e46d09e31623537f9414e1dcf658d2969b87303084c3eaa61991705"} Feb 24 00:14:40 crc kubenswrapper[5122]: I0224 00:14:40.923932 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t9rq" event={"ID":"127102d8-1da1-4582-9512-75958969764b","Type":"ContainerStarted","Data":"0c4f051db6a414bbd39eb58422052cd9b853dc8ab1fca4d57c2db2e63259f42a"} Feb 24 00:14:40 crc kubenswrapper[5122]: I0224 00:14:40.928217 5122 generic.go:358] "Generic (PLEG): container finished" podID="120593d4-24fb-4884-9aa1-ba609c88f3c5" containerID="2a0153697e46d09e31623537f9414e1dcf658d2969b87303084c3eaa61991705" exitCode=0 Feb 24 00:14:40 crc kubenswrapper[5122]: I0224 00:14:40.928504 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cprn" event={"ID":"120593d4-24fb-4884-9aa1-ba609c88f3c5","Type":"ContainerDied","Data":"2a0153697e46d09e31623537f9414e1dcf658d2969b87303084c3eaa61991705"} Feb 24 00:14:41 crc kubenswrapper[5122]: I0224 00:14:41.934011 5122 generic.go:358] "Generic (PLEG): container finished" podID="127102d8-1da1-4582-9512-75958969764b" containerID="0c4f051db6a414bbd39eb58422052cd9b853dc8ab1fca4d57c2db2e63259f42a" exitCode=0 Feb 24 00:14:41 crc kubenswrapper[5122]: I0224 00:14:41.934330 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t9rq" event={"ID":"127102d8-1da1-4582-9512-75958969764b","Type":"ContainerDied","Data":"0c4f051db6a414bbd39eb58422052cd9b853dc8ab1fca4d57c2db2e63259f42a"} Feb 24 00:14:41 crc kubenswrapper[5122]: I0224 00:14:41.939833 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cprn" event={"ID":"120593d4-24fb-4884-9aa1-ba609c88f3c5","Type":"ContainerStarted","Data":"4a5d0da7ed7cff92021526f8023f4d1e56aa1678046c36f3a07f1a9f8ed0259a"} 
Feb 24 00:14:42 crc kubenswrapper[5122]: I0224 00:14:42.002622 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8cprn" podStartSLOduration=5.749773169 podStartE2EDuration="7.002601391s" podCreationTimestamp="2026-02-24 00:14:35 +0000 UTC" firstStartedPulling="2026-02-24 00:14:37.899814328 +0000 UTC m=+344.989268841" lastFinishedPulling="2026-02-24 00:14:39.15264255 +0000 UTC m=+346.242097063" observedRunningTime="2026-02-24 00:14:41.999465495 +0000 UTC m=+349.088920018" watchObservedRunningTime="2026-02-24 00:14:42.002601391 +0000 UTC m=+349.092055904" Feb 24 00:14:42 crc kubenswrapper[5122]: I0224 00:14:42.946766 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t9rq" event={"ID":"127102d8-1da1-4582-9512-75958969764b","Type":"ContainerStarted","Data":"9c1872dbaeb2b85185fade9945aac2c091175f33237a39069fa8e49e6b9151d4"} Feb 24 00:14:42 crc kubenswrapper[5122]: I0224 00:14:42.968448 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4t9rq" podStartSLOduration=5.400809843 podStartE2EDuration="7.96842328s" podCreationTimestamp="2026-02-24 00:14:35 +0000 UTC" firstStartedPulling="2026-02-24 00:14:37.899258942 +0000 UTC m=+344.988713485" lastFinishedPulling="2026-02-24 00:14:40.466872419 +0000 UTC m=+347.556326922" observedRunningTime="2026-02-24 00:14:42.96113426 +0000 UTC m=+350.050588803" watchObservedRunningTime="2026-02-24 00:14:42.96842328 +0000 UTC m=+350.057877803" Feb 24 00:14:43 crc kubenswrapper[5122]: I0224 00:14:43.406983 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:43 crc kubenswrapper[5122]: I0224 00:14:43.407304 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:43 crc 
kubenswrapper[5122]: I0224 00:14:43.453689 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:43 crc kubenswrapper[5122]: I0224 00:14:43.524221 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:43 crc kubenswrapper[5122]: I0224 00:14:43.524935 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:43 crc kubenswrapper[5122]: I0224 00:14:43.565861 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:44 crc kubenswrapper[5122]: I0224 00:14:44.006248 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4j57" Feb 24 00:14:44 crc kubenswrapper[5122]: I0224 00:14:44.011213 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zxkvk" Feb 24 00:14:45 crc kubenswrapper[5122]: I0224 00:14:45.743653 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:45 crc kubenswrapper[5122]: I0224 00:14:45.743896 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:45 crc kubenswrapper[5122]: I0224 00:14:45.789636 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:46 crc kubenswrapper[5122]: I0224 00:14:46.031181 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:46 crc kubenswrapper[5122]: I0224 00:14:46.031507 5122 kubelet.go:2658] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:47 crc kubenswrapper[5122]: I0224 00:14:47.071463 5122 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8cprn" podUID="120593d4-24fb-4884-9aa1-ba609c88f3c5" containerName="registry-server" probeResult="failure" output=< Feb 24 00:14:47 crc kubenswrapper[5122]: timeout: failed to connect service ":50051" within 1s Feb 24 00:14:47 crc kubenswrapper[5122]: > Feb 24 00:14:56 crc kubenswrapper[5122]: I0224 00:14:56.013674 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4t9rq" Feb 24 00:14:56 crc kubenswrapper[5122]: I0224 00:14:56.068441 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:56 crc kubenswrapper[5122]: I0224 00:14:56.106005 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8cprn" Feb 24 00:14:58 crc kubenswrapper[5122]: I0224 00:14:58.100315 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpcq8" Feb 24 00:14:58 crc kubenswrapper[5122]: I0224 00:14:58.179132 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mkt9k"] Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.136727 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b"] Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.169235 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b"] Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.169381 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.171914 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.171961 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.293396 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d88969c-a3f9-4f4d-a93b-112bd268834e-config-volume\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.293461 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b9lk\" (UniqueName: \"kubernetes.io/projected/8d88969c-a3f9-4f4d-a93b-112bd268834e-kube-api-access-8b9lk\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.293519 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d88969c-a3f9-4f4d-a93b-112bd268834e-secret-volume\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.394855 5122 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d88969c-a3f9-4f4d-a93b-112bd268834e-config-volume\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.394914 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8b9lk\" (UniqueName: \"kubernetes.io/projected/8d88969c-a3f9-4f4d-a93b-112bd268834e-kube-api-access-8b9lk\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.394970 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d88969c-a3f9-4f4d-a93b-112bd268834e-secret-volume\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.395631 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d88969c-a3f9-4f4d-a93b-112bd268834e-config-volume\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.402877 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d88969c-a3f9-4f4d-a93b-112bd268834e-secret-volume\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc 
kubenswrapper[5122]: I0224 00:15:00.422346 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b9lk\" (UniqueName: \"kubernetes.io/projected/8d88969c-a3f9-4f4d-a93b-112bd268834e-kube-api-access-8b9lk\") pod \"collect-profiles-29531535-kqs6b\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.485493 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:00 crc kubenswrapper[5122]: I0224 00:15:00.872472 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b"] Feb 24 00:15:00 crc kubenswrapper[5122]: W0224 00:15:00.877311 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d88969c_a3f9_4f4d_a93b_112bd268834e.slice/crio-85ef2af911fc5b25e562397c545d5bbb0905899f482efdb7df757818f276d889 WatchSource:0}: Error finding container 85ef2af911fc5b25e562397c545d5bbb0905899f482efdb7df757818f276d889: Status 404 returned error can't find the container with id 85ef2af911fc5b25e562397c545d5bbb0905899f482efdb7df757818f276d889 Feb 24 00:15:01 crc kubenswrapper[5122]: I0224 00:15:01.045724 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" event={"ID":"8d88969c-a3f9-4f4d-a93b-112bd268834e","Type":"ContainerStarted","Data":"ead9e9496532ed224524c2d4c646c4e640f15144ecfafe7453f411d0b43b38aa"} Feb 24 00:15:01 crc kubenswrapper[5122]: I0224 00:15:01.046228 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" 
event={"ID":"8d88969c-a3f9-4f4d-a93b-112bd268834e","Type":"ContainerStarted","Data":"85ef2af911fc5b25e562397c545d5bbb0905899f482efdb7df757818f276d889"} Feb 24 00:15:01 crc kubenswrapper[5122]: I0224 00:15:01.065003 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" podStartSLOduration=1.064985087 podStartE2EDuration="1.064985087s" podCreationTimestamp="2026-02-24 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:15:01.062471767 +0000 UTC m=+368.151926350" watchObservedRunningTime="2026-02-24 00:15:01.064985087 +0000 UTC m=+368.154439600" Feb 24 00:15:02 crc kubenswrapper[5122]: I0224 00:15:02.052554 5122 generic.go:358] "Generic (PLEG): container finished" podID="8d88969c-a3f9-4f4d-a93b-112bd268834e" containerID="ead9e9496532ed224524c2d4c646c4e640f15144ecfafe7453f411d0b43b38aa" exitCode=0 Feb 24 00:15:02 crc kubenswrapper[5122]: I0224 00:15:02.052611 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" event={"ID":"8d88969c-a3f9-4f4d-a93b-112bd268834e","Type":"ContainerDied","Data":"ead9e9496532ed224524c2d4c646c4e640f15144ecfafe7453f411d0b43b38aa"} Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.317544 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.435536 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b9lk\" (UniqueName: \"kubernetes.io/projected/8d88969c-a3f9-4f4d-a93b-112bd268834e-kube-api-access-8b9lk\") pod \"8d88969c-a3f9-4f4d-a93b-112bd268834e\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.435631 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d88969c-a3f9-4f4d-a93b-112bd268834e-config-volume\") pod \"8d88969c-a3f9-4f4d-a93b-112bd268834e\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.435745 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d88969c-a3f9-4f4d-a93b-112bd268834e-secret-volume\") pod \"8d88969c-a3f9-4f4d-a93b-112bd268834e\" (UID: \"8d88969c-a3f9-4f4d-a93b-112bd268834e\") " Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.441888 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d88969c-a3f9-4f4d-a93b-112bd268834e-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d88969c-a3f9-4f4d-a93b-112bd268834e" (UID: "8d88969c-a3f9-4f4d-a93b-112bd268834e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.444231 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d88969c-a3f9-4f4d-a93b-112bd268834e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d88969c-a3f9-4f4d-a93b-112bd268834e" (UID: "8d88969c-a3f9-4f4d-a93b-112bd268834e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.444785 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d88969c-a3f9-4f4d-a93b-112bd268834e-kube-api-access-8b9lk" (OuterVolumeSpecName: "kube-api-access-8b9lk") pod "8d88969c-a3f9-4f4d-a93b-112bd268834e" (UID: "8d88969c-a3f9-4f4d-a93b-112bd268834e"). InnerVolumeSpecName "kube-api-access-8b9lk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.536698 5122 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d88969c-a3f9-4f4d-a93b-112bd268834e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.536730 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8b9lk\" (UniqueName: \"kubernetes.io/projected/8d88969c-a3f9-4f4d-a93b-112bd268834e-kube-api-access-8b9lk\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:03 crc kubenswrapper[5122]: I0224 00:15:03.536739 5122 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d88969c-a3f9-4f4d-a93b-112bd268834e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:04 crc kubenswrapper[5122]: I0224 00:15:04.068776 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" event={"ID":"8d88969c-a3f9-4f4d-a93b-112bd268834e","Type":"ContainerDied","Data":"85ef2af911fc5b25e562397c545d5bbb0905899f482efdb7df757818f276d889"} Feb 24 00:15:04 crc kubenswrapper[5122]: I0224 00:15:04.068801 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-kqs6b" Feb 24 00:15:04 crc kubenswrapper[5122]: I0224 00:15:04.068816 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ef2af911fc5b25e562397c545d5bbb0905899f482efdb7df757818f276d889" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.215112 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" podUID="c246391f-7d72-44c4-be1e-d9c37480d022" containerName="registry" containerID="cri-o://3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517" gracePeriod=30 Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.667312 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.680381 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-registry-tls\") pod \"c246391f-7d72-44c4-be1e-d9c37480d022\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.680479 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vb56\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-kube-api-access-4vb56\") pod \"c246391f-7d72-44c4-be1e-d9c37480d022\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.680526 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-trusted-ca\") pod \"c246391f-7d72-44c4-be1e-d9c37480d022\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " Feb 24 00:15:23 crc 
kubenswrapper[5122]: I0224 00:15:23.680576 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c246391f-7d72-44c4-be1e-d9c37480d022-ca-trust-extracted\") pod \"c246391f-7d72-44c4-be1e-d9c37480d022\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.680625 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-bound-sa-token\") pod \"c246391f-7d72-44c4-be1e-d9c37480d022\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.680772 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-registry-certificates\") pod \"c246391f-7d72-44c4-be1e-d9c37480d022\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.680809 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c246391f-7d72-44c4-be1e-d9c37480d022-installation-pull-secrets\") pod \"c246391f-7d72-44c4-be1e-d9c37480d022\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.680916 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"c246391f-7d72-44c4-be1e-d9c37480d022\" (UID: \"c246391f-7d72-44c4-be1e-d9c37480d022\") " Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.682212 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c246391f-7d72-44c4-be1e-d9c37480d022" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.682462 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c246391f-7d72-44c4-be1e-d9c37480d022" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.688032 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-kube-api-access-4vb56" (OuterVolumeSpecName: "kube-api-access-4vb56") pod "c246391f-7d72-44c4-be1e-d9c37480d022" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022"). InnerVolumeSpecName "kube-api-access-4vb56". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.689437 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c246391f-7d72-44c4-be1e-d9c37480d022-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c246391f-7d72-44c4-be1e-d9c37480d022" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.691252 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c246391f-7d72-44c4-be1e-d9c37480d022" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.694697 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c246391f-7d72-44c4-be1e-d9c37480d022" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.713339 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c246391f-7d72-44c4-be1e-d9c37480d022-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c246391f-7d72-44c4-be1e-d9c37480d022" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.713577 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "c246391f-7d72-44c4-be1e-d9c37480d022" (UID: "c246391f-7d72-44c4-be1e-d9c37480d022"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.782398 5122 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c246391f-7d72-44c4-be1e-d9c37480d022-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.782433 5122 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.782445 5122 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.782459 5122 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c246391f-7d72-44c4-be1e-d9c37480d022-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.782470 5122 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.782480 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4vb56\" (UniqueName: \"kubernetes.io/projected/c246391f-7d72-44c4-be1e-d9c37480d022-kube-api-access-4vb56\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:23 crc kubenswrapper[5122]: I0224 00:15:23.782490 5122 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c246391f-7d72-44c4-be1e-d9c37480d022-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:24 crc 
kubenswrapper[5122]: I0224 00:15:24.200640 5122 generic.go:358] "Generic (PLEG): container finished" podID="c246391f-7d72-44c4-be1e-d9c37480d022" containerID="3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517" exitCode=0 Feb 24 00:15:24 crc kubenswrapper[5122]: I0224 00:15:24.200701 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" event={"ID":"c246391f-7d72-44c4-be1e-d9c37480d022","Type":"ContainerDied","Data":"3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517"} Feb 24 00:15:24 crc kubenswrapper[5122]: I0224 00:15:24.201060 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" event={"ID":"c246391f-7d72-44c4-be1e-d9c37480d022","Type":"ContainerDied","Data":"22309b2113a441f18971da291719bfbb9791a627f05cd1e605315188de831ef1"} Feb 24 00:15:24 crc kubenswrapper[5122]: I0224 00:15:24.200718 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-mkt9k" Feb 24 00:15:24 crc kubenswrapper[5122]: I0224 00:15:24.201102 5122 scope.go:117] "RemoveContainer" containerID="3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517" Feb 24 00:15:24 crc kubenswrapper[5122]: I0224 00:15:24.217521 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mkt9k"] Feb 24 00:15:24 crc kubenswrapper[5122]: I0224 00:15:24.221282 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-mkt9k"] Feb 24 00:15:24 crc kubenswrapper[5122]: I0224 00:15:24.224192 5122 scope.go:117] "RemoveContainer" containerID="3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517" Feb 24 00:15:24 crc kubenswrapper[5122]: E0224 00:15:24.224633 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517\": container with ID starting with 3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517 not found: ID does not exist" containerID="3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517" Feb 24 00:15:24 crc kubenswrapper[5122]: I0224 00:15:24.224666 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517"} err="failed to get container status \"3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517\": rpc error: code = NotFound desc = could not find container \"3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517\": container with ID starting with 3c6a347cbfc7735b52724d280fe2101afc3a03ca12ffbd3f76debd936a79e517 not found: ID does not exist" Feb 24 00:15:25 crc kubenswrapper[5122]: I0224 00:15:25.786140 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c246391f-7d72-44c4-be1e-d9c37480d022" path="/var/lib/kubelet/pods/c246391f-7d72-44c4-be1e-d9c37480d022/volumes" Feb 24 00:15:27 crc kubenswrapper[5122]: I0224 00:15:27.116193 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:15:27 crc kubenswrapper[5122]: I0224 00:15:27.116670 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:15:57 crc kubenswrapper[5122]: I0224 00:15:57.116119 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:15:57 crc kubenswrapper[5122]: I0224 00:15:57.116745 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.140451 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531536-r64fm"] Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.141844 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c246391f-7d72-44c4-be1e-d9c37480d022" containerName="registry" 
Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.141861 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="c246391f-7d72-44c4-be1e-d9c37480d022" containerName="registry" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.141895 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d88969c-a3f9-4f4d-a93b-112bd268834e" containerName="collect-profiles" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.141901 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d88969c-a3f9-4f4d-a93b-112bd268834e" containerName="collect-profiles" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.141993 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="c246391f-7d72-44c4-be1e-d9c37480d022" containerName="registry" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.142006 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d88969c-a3f9-4f4d-a93b-112bd268834e" containerName="collect-profiles" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.149014 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531536-r64fm"] Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.149220 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531536-r64fm" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.154476 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.154882 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.155056 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.327866 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8rwm\" (UniqueName: \"kubernetes.io/projected/91f68066-6c73-4adf-b332-e4c155644702-kube-api-access-c8rwm\") pod \"auto-csr-approver-29531536-r64fm\" (UID: \"91f68066-6c73-4adf-b332-e4c155644702\") " pod="openshift-infra/auto-csr-approver-29531536-r64fm" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.429273 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8rwm\" (UniqueName: \"kubernetes.io/projected/91f68066-6c73-4adf-b332-e4c155644702-kube-api-access-c8rwm\") pod \"auto-csr-approver-29531536-r64fm\" (UID: \"91f68066-6c73-4adf-b332-e4c155644702\") " pod="openshift-infra/auto-csr-approver-29531536-r64fm" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.466495 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8rwm\" (UniqueName: \"kubernetes.io/projected/91f68066-6c73-4adf-b332-e4c155644702-kube-api-access-c8rwm\") pod \"auto-csr-approver-29531536-r64fm\" (UID: \"91f68066-6c73-4adf-b332-e4c155644702\") " pod="openshift-infra/auto-csr-approver-29531536-r64fm" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.472013 5122 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531536-r64fm" Feb 24 00:16:00 crc kubenswrapper[5122]: I0224 00:16:00.700616 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531536-r64fm"] Feb 24 00:16:01 crc kubenswrapper[5122]: I0224 00:16:01.505270 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531536-r64fm" event={"ID":"91f68066-6c73-4adf-b332-e4c155644702","Type":"ContainerStarted","Data":"eeaf59d406fe750c97fed8e1760606e4fb3d5b4596c8ca3bf2733367aa49aee0"} Feb 24 00:16:04 crc kubenswrapper[5122]: I0224 00:16:04.317741 5122 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-mvw6p" Feb 24 00:16:04 crc kubenswrapper[5122]: I0224 00:16:04.358788 5122 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-mvw6p" Feb 24 00:16:04 crc kubenswrapper[5122]: I0224 00:16:04.525599 5122 generic.go:358] "Generic (PLEG): container finished" podID="91f68066-6c73-4adf-b332-e4c155644702" containerID="0d0957a3a775c1947f79875d3ae098e7d75ecd8ff6f157a8e21ec6653afc47f6" exitCode=0 Feb 24 00:16:04 crc kubenswrapper[5122]: I0224 00:16:04.525769 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531536-r64fm" event={"ID":"91f68066-6c73-4adf-b332-e4c155644702","Type":"ContainerDied","Data":"0d0957a3a775c1947f79875d3ae098e7d75ecd8ff6f157a8e21ec6653afc47f6"} Feb 24 00:16:05 crc kubenswrapper[5122]: I0224 00:16:05.361755 5122 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-26 00:11:04 +0000 UTC" deadline="2026-03-17 00:13:13.55413072 +0000 UTC" Feb 24 00:16:05 crc kubenswrapper[5122]: I0224 00:16:05.362054 5122 certificate_manager.go:431] "Waiting for next certificate rotation" 
logger="kubernetes.io/kubelet-serving" sleep="503h57m8.192082256s" Feb 24 00:16:05 crc kubenswrapper[5122]: I0224 00:16:05.840889 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531536-r64fm" Feb 24 00:16:05 crc kubenswrapper[5122]: I0224 00:16:05.930680 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8rwm\" (UniqueName: \"kubernetes.io/projected/91f68066-6c73-4adf-b332-e4c155644702-kube-api-access-c8rwm\") pod \"91f68066-6c73-4adf-b332-e4c155644702\" (UID: \"91f68066-6c73-4adf-b332-e4c155644702\") " Feb 24 00:16:05 crc kubenswrapper[5122]: I0224 00:16:05.943769 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f68066-6c73-4adf-b332-e4c155644702-kube-api-access-c8rwm" (OuterVolumeSpecName: "kube-api-access-c8rwm") pod "91f68066-6c73-4adf-b332-e4c155644702" (UID: "91f68066-6c73-4adf-b332-e4c155644702"). InnerVolumeSpecName "kube-api-access-c8rwm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:16:06 crc kubenswrapper[5122]: I0224 00:16:06.031828 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c8rwm\" (UniqueName: \"kubernetes.io/projected/91f68066-6c73-4adf-b332-e4c155644702-kube-api-access-c8rwm\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:06 crc kubenswrapper[5122]: I0224 00:16:06.362734 5122 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-26 00:11:04 +0000 UTC" deadline="2026-03-22 03:17:40.402280244 +0000 UTC" Feb 24 00:16:06 crc kubenswrapper[5122]: I0224 00:16:06.363140 5122 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="627h1m34.039145656s" Feb 24 00:16:06 crc kubenswrapper[5122]: I0224 00:16:06.540598 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531536-r64fm" Feb 24 00:16:06 crc kubenswrapper[5122]: I0224 00:16:06.540622 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531536-r64fm" event={"ID":"91f68066-6c73-4adf-b332-e4c155644702","Type":"ContainerDied","Data":"eeaf59d406fe750c97fed8e1760606e4fb3d5b4596c8ca3bf2733367aa49aee0"} Feb 24 00:16:06 crc kubenswrapper[5122]: I0224 00:16:06.540678 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeaf59d406fe750c97fed8e1760606e4fb3d5b4596c8ca3bf2733367aa49aee0" Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.115480 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.116188 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.116267 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.117521 5122 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50ee2266507123df66125337ecf3ff8ca0f7771d42782902e0efdef0eafd857f"} pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.117611 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" containerID="cri-o://50ee2266507123df66125337ecf3ff8ca0f7771d42782902e0efdef0eafd857f" gracePeriod=600 Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.688636 5122 generic.go:358] "Generic (PLEG): container finished" podID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerID="50ee2266507123df66125337ecf3ff8ca0f7771d42782902e0efdef0eafd857f" exitCode=0 Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.688731 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerDied","Data":"50ee2266507123df66125337ecf3ff8ca0f7771d42782902e0efdef0eafd857f"} Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.688957 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"a2440177b838348268a0bef8a6e72892e9f62cf0d62c5963f5c3b068ced560cd"} Feb 24 00:16:27 crc kubenswrapper[5122]: I0224 00:16:27.688979 5122 scope.go:117] "RemoveContainer" containerID="73cf22631bce10f6195cc5bf18e0532829e23827e5caef8d4c7a64bb33e6728b" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.145843 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531538-ch8p9"] Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.146958 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91f68066-6c73-4adf-b332-e4c155644702" containerName="oc" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.146977 5122 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="91f68066-6c73-4adf-b332-e4c155644702" containerName="oc" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.147062 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="91f68066-6c73-4adf-b332-e4c155644702" containerName="oc" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.152159 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531538-ch8p9"] Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.152250 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531538-ch8p9" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.154350 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.155522 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.156111 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.212555 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pfs4\" (UniqueName: \"kubernetes.io/projected/f77aac90-1868-4c57-8629-c69449252bd9-kube-api-access-8pfs4\") pod \"auto-csr-approver-29531538-ch8p9\" (UID: \"f77aac90-1868-4c57-8629-c69449252bd9\") " pod="openshift-infra/auto-csr-approver-29531538-ch8p9" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.314136 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8pfs4\" (UniqueName: \"kubernetes.io/projected/f77aac90-1868-4c57-8629-c69449252bd9-kube-api-access-8pfs4\") pod \"auto-csr-approver-29531538-ch8p9\" (UID: 
\"f77aac90-1868-4c57-8629-c69449252bd9\") " pod="openshift-infra/auto-csr-approver-29531538-ch8p9" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.348480 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pfs4\" (UniqueName: \"kubernetes.io/projected/f77aac90-1868-4c57-8629-c69449252bd9-kube-api-access-8pfs4\") pod \"auto-csr-approver-29531538-ch8p9\" (UID: \"f77aac90-1868-4c57-8629-c69449252bd9\") " pod="openshift-infra/auto-csr-approver-29531538-ch8p9" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.470983 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531538-ch8p9" Feb 24 00:18:00 crc kubenswrapper[5122]: I0224 00:18:00.816951 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531538-ch8p9"] Feb 24 00:18:01 crc kubenswrapper[5122]: I0224 00:18:01.356106 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531538-ch8p9" event={"ID":"f77aac90-1868-4c57-8629-c69449252bd9","Type":"ContainerStarted","Data":"db380caba0209753f7ebe444344363b7085a4131cace6b9bdb24bb27910f6d79"} Feb 24 00:18:02 crc kubenswrapper[5122]: I0224 00:18:02.362944 5122 generic.go:358] "Generic (PLEG): container finished" podID="f77aac90-1868-4c57-8629-c69449252bd9" containerID="980f8f96bec9f8dc7b954ca06e40b86141f817f3b24633aad1ecce6e7d420148" exitCode=0 Feb 24 00:18:02 crc kubenswrapper[5122]: I0224 00:18:02.362998 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531538-ch8p9" event={"ID":"f77aac90-1868-4c57-8629-c69449252bd9","Type":"ContainerDied","Data":"980f8f96bec9f8dc7b954ca06e40b86141f817f3b24633aad1ecce6e7d420148"} Feb 24 00:18:03 crc kubenswrapper[5122]: I0224 00:18:03.597124 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531538-ch8p9" Feb 24 00:18:03 crc kubenswrapper[5122]: I0224 00:18:03.659366 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pfs4\" (UniqueName: \"kubernetes.io/projected/f77aac90-1868-4c57-8629-c69449252bd9-kube-api-access-8pfs4\") pod \"f77aac90-1868-4c57-8629-c69449252bd9\" (UID: \"f77aac90-1868-4c57-8629-c69449252bd9\") " Feb 24 00:18:03 crc kubenswrapper[5122]: I0224 00:18:03.664919 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77aac90-1868-4c57-8629-c69449252bd9-kube-api-access-8pfs4" (OuterVolumeSpecName: "kube-api-access-8pfs4") pod "f77aac90-1868-4c57-8629-c69449252bd9" (UID: "f77aac90-1868-4c57-8629-c69449252bd9"). InnerVolumeSpecName "kube-api-access-8pfs4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:18:03 crc kubenswrapper[5122]: I0224 00:18:03.761499 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pfs4\" (UniqueName: \"kubernetes.io/projected/f77aac90-1868-4c57-8629-c69449252bd9-kube-api-access-8pfs4\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:04 crc kubenswrapper[5122]: I0224 00:18:04.378445 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531538-ch8p9" event={"ID":"f77aac90-1868-4c57-8629-c69449252bd9","Type":"ContainerDied","Data":"db380caba0209753f7ebe444344363b7085a4131cace6b9bdb24bb27910f6d79"} Feb 24 00:18:04 crc kubenswrapper[5122]: I0224 00:18:04.378536 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db380caba0209753f7ebe444344363b7085a4131cace6b9bdb24bb27910f6d79" Feb 24 00:18:04 crc kubenswrapper[5122]: I0224 00:18:04.378472 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531538-ch8p9" Feb 24 00:18:27 crc kubenswrapper[5122]: I0224 00:18:27.115136 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:18:27 crc kubenswrapper[5122]: I0224 00:18:27.115672 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:18:54 crc kubenswrapper[5122]: I0224 00:18:54.057549 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:18:54 crc kubenswrapper[5122]: I0224 00:18:54.062473 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:18:57 crc kubenswrapper[5122]: I0224 00:18:57.115126 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:18:57 crc kubenswrapper[5122]: I0224 00:18:57.115446 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.186377 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"] Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.190037 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerName="kube-rbac-proxy" containerID="cri-o://43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.190517 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerName="ovnkube-cluster-manager" containerID="cri-o://97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.379177 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.398743 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b4r7n"] Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.399427 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovn-controller" containerID="cri-o://6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.399546 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.399620 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovn-acl-logging" containerID="cri-o://e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.399725 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="northd" containerID="cri-o://2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.399725 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" 
containerName="kube-rbac-proxy-node" containerID="cri-o://3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.399772 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="sbdb" containerID="cri-o://4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.400951 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="nbdb" containerID="cri-o://31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.417653 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p"] Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418399 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerName="kube-rbac-proxy" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418424 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerName="kube-rbac-proxy" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418464 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f77aac90-1868-4c57-8629-c69449252bd9" containerName="oc" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418475 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77aac90-1868-4c57-8629-c69449252bd9" containerName="oc" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418491 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerName="ovnkube-cluster-manager" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418504 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerName="ovnkube-cluster-manager" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418633 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerName="ovnkube-cluster-manager" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418646 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="f77aac90-1868-4c57-8629-c69449252bd9" containerName="oc" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.418674 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerName="kube-rbac-proxy" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.422711 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.430226 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovnkube-controller" containerID="cri-o://ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0" gracePeriod=30 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.454869 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovn-control-plane-metrics-cert\") pod \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.455100 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w5q6\" (UniqueName: \"kubernetes.io/projected/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-kube-api-access-2w5q6\") pod \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.455165 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovnkube-config\") pod \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.455185 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-env-overrides\") pod \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\" (UID: \"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.455283 5122 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.455412 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtdk6\" (UniqueName: \"kubernetes.io/projected/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-kube-api-access-wtdk6\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.455450 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.455473 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.457009 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod 
"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" (UID: "03f5a8e7-4852-4e7b-8dca-ce9f9facfe85"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.457060 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" (UID: "03f5a8e7-4852-4e7b-8dca-ce9f9facfe85"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.463018 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-kube-api-access-2w5q6" (OuterVolumeSpecName: "kube-api-access-2w5q6") pod "03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" (UID: "03f5a8e7-4852-4e7b-8dca-ce9f9facfe85"). InnerVolumeSpecName "kube-api-access-2w5q6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.463048 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" (UID: "03f5a8e7-4852-4e7b-8dca-ce9f9facfe85"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.557310 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtdk6\" (UniqueName: \"kubernetes.io/projected/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-kube-api-access-wtdk6\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.558214 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.558240 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.558265 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.558341 5122 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.558351 5122 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.558360 5122 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.558370 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2w5q6\" (UniqueName: \"kubernetes.io/projected/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85-kube-api-access-2w5q6\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.559522 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.559633 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.563867 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.575657 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtdk6\" (UniqueName: \"kubernetes.io/projected/9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc-kube-api-access-wtdk6\") pod \"ovnkube-control-plane-97c9b6c48-shn9p\" (UID: \"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.675960 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b4r7n_b3ea2c06-ac71-4ff2-aba9-54e26871039e/ovn-acl-logging/0.log" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.676424 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b4r7n_b3ea2c06-ac71-4ff2-aba9-54e26871039e/ovn-controller/0.log" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.676900 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.725969 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nk2qn"] Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726591 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kubecfg-setup" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726616 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kubecfg-setup" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726635 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovn-acl-logging" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726644 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovn-acl-logging" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726659 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726668 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726681 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kube-rbac-proxy-node" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726688 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kube-rbac-proxy-node" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726700 5122 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="sbdb" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726707 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="sbdb" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726722 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovnkube-controller" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726730 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovnkube-controller" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726739 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovn-controller" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726747 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovn-controller" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726759 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="northd" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726770 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="northd" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726786 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="nbdb" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726796 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="nbdb" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726914 5122 
memory_manager.go:356] "RemoveStaleState removing state" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovn-controller" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726932 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726940 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="sbdb" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726954 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="nbdb" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726964 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovnkube-controller" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726972 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="kube-rbac-proxy-node" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726982 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="northd" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.726993 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerName="ovn-acl-logging" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.732705 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.745094 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.761281 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovn-node-metrics-cert\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.761328 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-netd\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.761386 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-openvswitch\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.761753 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.761866 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762478 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zk4n\" (UniqueName: \"kubernetes.io/projected/b3ea2c06-ac71-4ff2-aba9-54e26871039e-kube-api-access-4zk4n\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762559 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-kubelet\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762588 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762652 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762712 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-bin\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762763 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-systemd-units\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762792 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762830 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762925 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762955 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-etc-openvswitch\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.762982 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-env-overrides\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") " Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.763495 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.763634 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-log-socket" (OuterVolumeSpecName: "log-socket") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.764004 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.766374 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ea2c06-ac71-4ff2-aba9-54e26871039e-kube-api-access-4zk4n" (OuterVolumeSpecName: "kube-api-access-4zk4n") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "kube-api-access-4zk4n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.766738 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-log-socket\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.766806 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-slash\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.766904 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-slash" (OuterVolumeSpecName: "host-slash") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.766943 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-script-lib\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.766963 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-ovn-kubernetes\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767014 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-var-lib-openvswitch\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767116 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767166 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767244 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-config\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767269 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-ovn\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767448 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767693 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767697 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767723 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767797 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-node-log\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767821 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-netns\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767873 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-node-log" (OuterVolumeSpecName: "node-log") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767920 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-systemd\") pod \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\" (UID: \"b3ea2c06-ac71-4ff2-aba9-54e26871039e\") "
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.767966 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.768675 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq9m5\" (UniqueName: \"kubernetes.io/projected/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-kube-api-access-nq9m5\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.768765 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.768809 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-cni-netd\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.768839 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-kubelet\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.768904 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovn-node-metrics-cert\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.768937 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-log-socket\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.768963 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovnkube-script-lib\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.768990 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovnkube-config\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769013 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-systemd\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769086 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-systemd-units\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769108 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769135 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-slash\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769157 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-env-overrides\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769198 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-run-netns\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769236 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-ovn\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769260 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-etc-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769332 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-node-log\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769358 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-var-lib-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769387 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769414 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-cni-bin\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769460 5122 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769475 5122 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769488 5122 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-node-log\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769500 5122 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769513 5122 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769524 5122 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769535 5122 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769548 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4zk4n\" (UniqueName: \"kubernetes.io/projected/b3ea2c06-ac71-4ff2-aba9-54e26871039e-kube-api-access-4zk4n\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769559 5122 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769571 5122 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769583 5122 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769595 5122 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769606 5122 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769617 5122 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769628 5122 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-log-socket\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769640 5122 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-slash\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769651 5122 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ea2c06-ac71-4ff2-aba9-54e26871039e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769662 5122 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.769674 5122 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.783381 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b3ea2c06-ac71-4ff2-aba9-54e26871039e" (UID: "b3ea2c06-ac71-4ff2-aba9-54e26871039e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.786340 5122 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.857339 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jz28d_b5f97112-ba2a-46c0-a285-a845d2f96be9/kube-multus/0.log"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.857400 5122 generic.go:358] "Generic (PLEG): container finished" podID="b5f97112-ba2a-46c0-a285-a845d2f96be9" containerID="6ae8c4088d8f6d9782d4238d3f74f219f9f4ebb6252995e8203d6f7002583268" exitCode=2
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.857443 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jz28d" event={"ID":"b5f97112-ba2a-46c0-a285-a845d2f96be9","Type":"ContainerDied","Data":"6ae8c4088d8f6d9782d4238d3f74f219f9f4ebb6252995e8203d6f7002583268"}
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.858013 5122 scope.go:117] "RemoveContainer" containerID="6ae8c4088d8f6d9782d4238d3f74f219f9f4ebb6252995e8203d6f7002583268"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.859764 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" event={"ID":"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc","Type":"ContainerStarted","Data":"6de6814d4a523218275182ba9b0d682ecb08ea9fcb6595ca187e3fc439aa8ee5"}
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.862974 5122 generic.go:358] "Generic (PLEG): container finished" podID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerID="97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676" exitCode=0
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.863008 5122 generic.go:358] "Generic (PLEG): container finished" podID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" containerID="43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2" exitCode=0
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.863288 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" event={"ID":"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85","Type":"ContainerDied","Data":"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676"}
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.863326 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" event={"ID":"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85","Type":"ContainerDied","Data":"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2"}
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.863349 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7" event={"ID":"03f5a8e7-4852-4e7b-8dca-ce9f9facfe85","Type":"ContainerDied","Data":"306fa3f2b6c3715596ad445fa4eb619d877e86fbb86e477d60c9e18cd4bdcc4d"}
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.863370 5122 scope.go:117] "RemoveContainer" containerID="97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.864356 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871279 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871426 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871502 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-cni-netd\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871553 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-kubelet\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871613 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovn-node-metrics-cert\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871672 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-log-socket\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871704 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovnkube-script-lib\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871751 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovnkube-config\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871781 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-systemd\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871886 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-cni-netd\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.871938 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-kubelet\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.874497 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-systemd-units\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.874553 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.874600 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-slash\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.874637 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-env-overrides\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.874972 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-run-netns\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875014 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-ovn\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875038 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-etc-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875158 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-node-log\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875185 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-var-lib-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875215 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875253 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-cni-bin\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875286 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nq9m5\" (UniqueName: \"kubernetes.io/projected/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-kube-api-access-nq9m5\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875415 5122 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ea2c06-ac71-4ff2-aba9-54e26871039e-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.875990 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-systemd-units\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.876031 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.876116 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-slash\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.878369 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-env-overrides\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.878534 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-run-netns\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.878643 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-ovn\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.878713 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-etc-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.878934 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovn-node-metrics-cert\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.879284 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-log-socket\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.879358 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-node-log\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.879656 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-var-lib-openvswitch\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.879722 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.879768 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-host-cni-bin\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.879927 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovnkube-config\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.880154 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-run-systemd\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.884722 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-ovnkube-script-lib\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.892869 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b4r7n_b3ea2c06-ac71-4ff2-aba9-54e26871039e/ovn-acl-logging/0.log"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.894230 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b4r7n_b3ea2c06-ac71-4ff2-aba9-54e26871039e/ovn-controller/0.log"
Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895002 5122 generic.go:358] "Generic (PLEG): container finished"
podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0" exitCode=0 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895145 5122 generic.go:358] "Generic (PLEG): container finished" podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818" exitCode=0 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895217 5122 generic.go:358] "Generic (PLEG): container finished" podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0" exitCode=0 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895289 5122 generic.go:358] "Generic (PLEG): container finished" podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77" exitCode=0 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895356 5122 generic.go:358] "Generic (PLEG): container finished" podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339" exitCode=0 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895519 5122 generic.go:358] "Generic (PLEG): container finished" podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983" exitCode=0 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895578 5122 generic.go:358] "Generic (PLEG): container finished" podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea" exitCode=143 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895631 5122 generic.go:358] "Generic (PLEG): container finished" podID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" containerID="6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535" exitCode=143 Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895606 5122 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895657 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895846 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895922 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.895982 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896040 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896118 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" 
event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896175 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896234 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896309 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896373 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896436 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896500 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896567 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896624 5122 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896673 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896721 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896779 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896832 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896884 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896931 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.896998 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} Feb 24 00:19:10 crc kubenswrapper[5122]: 
I0224 00:19:10.897064 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897153 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897201 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897253 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897307 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897363 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897418 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897467 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897516 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897583 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897647 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897715 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897782 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897837 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897902 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b4r7n" event={"ID":"b3ea2c06-ac71-4ff2-aba9-54e26871039e","Type":"ContainerDied","Data":"e04d77c6147dad4200aa5e175c277ec89bbd7f0e8770e58347edb6da6dbebf98"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.897970 5122 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.898034 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.898104 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.898171 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.898233 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.898306 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.898362 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.898413 5122 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.898476 5122 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.901013 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq9m5\" (UniqueName: \"kubernetes.io/projected/f14ef8d5-3c4e-4f86-8933-bf40ab75759b-kube-api-access-nq9m5\") pod \"ovnkube-node-nk2qn\" (UID: \"f14ef8d5-3c4e-4f86-8933-bf40ab75759b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.906090 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"] Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.910255 5122 scope.go:117] "RemoveContainer" containerID="43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.910549 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-48fw7"] Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.940061 5122 scope.go:117] "RemoveContainer" containerID="97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676" Feb 24 00:19:10 crc kubenswrapper[5122]: E0224 00:19:10.940800 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676\": container with ID starting with 97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676 not found: ID does not exist" containerID="97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.940839 5122 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676"} err="failed to get container status \"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676\": rpc error: code = NotFound desc = could not find container \"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676\": container with ID starting with 97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676 not found: ID does not exist" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.940863 5122 scope.go:117] "RemoveContainer" containerID="43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2" Feb 24 00:19:10 crc kubenswrapper[5122]: E0224 00:19:10.941484 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2\": container with ID starting with 43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2 not found: ID does not exist" containerID="43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.941558 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2"} err="failed to get container status \"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2\": rpc error: code = NotFound desc = could not find container \"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2\": container with ID starting with 43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2 not found: ID does not exist" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.941594 5122 scope.go:117] "RemoveContainer" containerID="97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.942160 5122 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676"} err="failed to get container status \"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676\": rpc error: code = NotFound desc = could not find container \"97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676\": container with ID starting with 97e3f1ce3f982d175ddbfa14d0eca77928abbeda3fa93b24bd46b9ced160c676 not found: ID does not exist" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.942285 5122 scope.go:117] "RemoveContainer" containerID="43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.943701 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b4r7n"] Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.943890 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2"} err="failed to get container status \"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2\": rpc error: code = NotFound desc = could not find container \"43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2\": container with ID starting with 43d65c74c4471c8df117dc784a102b480ad54d682118424a452f3849576385d2 not found: ID does not exist" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.943922 5122 scope.go:117] "RemoveContainer" containerID="ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.953511 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b4r7n"] Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.959004 5122 scope.go:117] "RemoveContainer" containerID="4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 
00:19:10.983582 5122 scope.go:117] "RemoveContainer" containerID="31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0" Feb 24 00:19:10 crc kubenswrapper[5122]: I0224 00:19:10.999001 5122 scope.go:117] "RemoveContainer" containerID="2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.015065 5122 scope.go:117] "RemoveContainer" containerID="7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.037547 5122 scope.go:117] "RemoveContainer" containerID="3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.058258 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.067220 5122 scope.go:117] "RemoveContainer" containerID="e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.081011 5122 scope.go:117] "RemoveContainer" containerID="6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535" Feb 24 00:19:11 crc kubenswrapper[5122]: W0224 00:19:11.084653 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf14ef8d5_3c4e_4f86_8933_bf40ab75759b.slice/crio-748e4b5ac86d8d52ba3dcff2412398e175a1b45ddbb44e19a2190b44975d6e13 WatchSource:0}: Error finding container 748e4b5ac86d8d52ba3dcff2412398e175a1b45ddbb44e19a2190b44975d6e13: Status 404 returned error can't find the container with id 748e4b5ac86d8d52ba3dcff2412398e175a1b45ddbb44e19a2190b44975d6e13 Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.106311 5122 scope.go:117] "RemoveContainer" containerID="51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.121515 
5122 scope.go:117] "RemoveContainer" containerID="ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.121918 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": container with ID starting with ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0 not found: ID does not exist" containerID="ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.122026 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} err="failed to get container status \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": rpc error: code = NotFound desc = could not find container \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": container with ID starting with ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.122193 5122 scope.go:117] "RemoveContainer" containerID="4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.122639 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": container with ID starting with 4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818 not found: ID does not exist" containerID="4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.122682 5122 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} err="failed to get container status \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": rpc error: code = NotFound desc = could not find container \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": container with ID starting with 4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.122709 5122 scope.go:117] "RemoveContainer" containerID="31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.122917 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": container with ID starting with 31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0 not found: ID does not exist" containerID="31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.122946 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} err="failed to get container status \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": rpc error: code = NotFound desc = could not find container \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": container with ID starting with 31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.122962 5122 scope.go:117] "RemoveContainer" containerID="2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.123308 5122 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": container with ID starting with 2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77 not found: ID does not exist" containerID="2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.123333 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} err="failed to get container status \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": rpc error: code = NotFound desc = could not find container \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": container with ID starting with 2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.123348 5122 scope.go:117] "RemoveContainer" containerID="7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.123559 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": container with ID starting with 7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339 not found: ID does not exist" containerID="7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.123710 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} err="failed to get container status \"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": rpc error: code = NotFound desc = could not find container 
\"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": container with ID starting with 7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.123802 5122 scope.go:117] "RemoveContainer" containerID="3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.124131 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": container with ID starting with 3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983 not found: ID does not exist" containerID="3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.124222 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} err="failed to get container status \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": rpc error: code = NotFound desc = could not find container \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": container with ID starting with 3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.124314 5122 scope.go:117] "RemoveContainer" containerID="e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.125046 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": container with ID starting with e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea not found: ID does not exist" 
containerID="e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.125097 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} err="failed to get container status \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": rpc error: code = NotFound desc = could not find container \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": container with ID starting with e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.125121 5122 scope.go:117] "RemoveContainer" containerID="6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.125398 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": container with ID starting with 6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535 not found: ID does not exist" containerID="6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.125435 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} err="failed to get container status \"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": rpc error: code = NotFound desc = could not find container \"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": container with ID starting with 6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.125454 5122 scope.go:117] 
"RemoveContainer" containerID="51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a" Feb 24 00:19:11 crc kubenswrapper[5122]: E0224 00:19:11.125721 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": container with ID starting with 51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a not found: ID does not exist" containerID="51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.125832 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} err="failed to get container status \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": rpc error: code = NotFound desc = could not find container \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": container with ID starting with 51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.125922 5122 scope.go:117] "RemoveContainer" containerID="ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.126295 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} err="failed to get container status \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": rpc error: code = NotFound desc = could not find container \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": container with ID starting with ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.126311 5122 
scope.go:117] "RemoveContainer" containerID="4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.126496 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} err="failed to get container status \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": rpc error: code = NotFound desc = could not find container \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": container with ID starting with 4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.126515 5122 scope.go:117] "RemoveContainer" containerID="31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.126826 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} err="failed to get container status \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": rpc error: code = NotFound desc = could not find container \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": container with ID starting with 31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.126858 5122 scope.go:117] "RemoveContainer" containerID="2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.127103 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} err="failed to get container status \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": rpc 
error: code = NotFound desc = could not find container \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": container with ID starting with 2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.127130 5122 scope.go:117] "RemoveContainer" containerID="7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.127444 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} err="failed to get container status \"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": rpc error: code = NotFound desc = could not find container \"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": container with ID starting with 7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.127482 5122 scope.go:117] "RemoveContainer" containerID="3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.127743 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} err="failed to get container status \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": rpc error: code = NotFound desc = could not find container \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": container with ID starting with 3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.127768 5122 scope.go:117] "RemoveContainer" containerID="e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea" Feb 24 00:19:11 crc 
kubenswrapper[5122]: I0224 00:19:11.127967 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} err="failed to get container status \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": rpc error: code = NotFound desc = could not find container \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": container with ID starting with e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.127990 5122 scope.go:117] "RemoveContainer" containerID="6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.128272 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} err="failed to get container status \"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": rpc error: code = NotFound desc = could not find container \"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": container with ID starting with 6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.128299 5122 scope.go:117] "RemoveContainer" containerID="51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.128592 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} err="failed to get container status \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": rpc error: code = NotFound desc = could not find container \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": container 
with ID starting with 51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.128704 5122 scope.go:117] "RemoveContainer" containerID="ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.129088 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} err="failed to get container status \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": rpc error: code = NotFound desc = could not find container \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": container with ID starting with ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.129131 5122 scope.go:117] "RemoveContainer" containerID="4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.129387 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} err="failed to get container status \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": rpc error: code = NotFound desc = could not find container \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": container with ID starting with 4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.129413 5122 scope.go:117] "RemoveContainer" containerID="31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.129762 5122 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} err="failed to get container status \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": rpc error: code = NotFound desc = could not find container \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": container with ID starting with 31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.129780 5122 scope.go:117] "RemoveContainer" containerID="2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.129978 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} err="failed to get container status \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": rpc error: code = NotFound desc = could not find container \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": container with ID starting with 2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.130091 5122 scope.go:117] "RemoveContainer" containerID="7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.130380 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} err="failed to get container status \"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": rpc error: code = NotFound desc = could not find container \"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": container with ID starting with 7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339 not found: ID does not 
exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.130469 5122 scope.go:117] "RemoveContainer" containerID="3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.130752 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} err="failed to get container status \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": rpc error: code = NotFound desc = could not find container \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": container with ID starting with 3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.130768 5122 scope.go:117] "RemoveContainer" containerID="e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.131017 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} err="failed to get container status \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": rpc error: code = NotFound desc = could not find container \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": container with ID starting with e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.131032 5122 scope.go:117] "RemoveContainer" containerID="6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.131395 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} err="failed to get container status 
\"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": rpc error: code = NotFound desc = could not find container \"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": container with ID starting with 6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.131423 5122 scope.go:117] "RemoveContainer" containerID="51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.131632 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} err="failed to get container status \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": rpc error: code = NotFound desc = could not find container \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": container with ID starting with 51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.131650 5122 scope.go:117] "RemoveContainer" containerID="ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.132720 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0"} err="failed to get container status \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": rpc error: code = NotFound desc = could not find container \"ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0\": container with ID starting with ee376d414c0b644d8bf58976d54052bf59d59cb44f75408231a37a54827edec0 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.132818 5122 scope.go:117] "RemoveContainer" 
containerID="4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.133147 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818"} err="failed to get container status \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": rpc error: code = NotFound desc = could not find container \"4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818\": container with ID starting with 4e2c2c89500c5c4c31385963d9623a06117cd4990ffd6906998538b797e9e818 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.133252 5122 scope.go:117] "RemoveContainer" containerID="31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.137157 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0"} err="failed to get container status \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": rpc error: code = NotFound desc = could not find container \"31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0\": container with ID starting with 31e0ab0aec90328772d549a288780f027c341b029d80864fce031f9cf470bbd0 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.137193 5122 scope.go:117] "RemoveContainer" containerID="2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.137701 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77"} err="failed to get container status \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": rpc error: code = NotFound desc = could 
not find container \"2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77\": container with ID starting with 2a470261ad5fb96a1cca868827115990155b2f118495d1a6e891bb902dfb4b77 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.137744 5122 scope.go:117] "RemoveContainer" containerID="7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.138168 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339"} err="failed to get container status \"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": rpc error: code = NotFound desc = could not find container \"7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339\": container with ID starting with 7bfb20eb72462f9c1ba7f11223bb1b4e0198c73a80184295992acba4d05fa339 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.138288 5122 scope.go:117] "RemoveContainer" containerID="3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.138648 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983"} err="failed to get container status \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": rpc error: code = NotFound desc = could not find container \"3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983\": container with ID starting with 3f1431e037eb09078479a17302fa1fc5926dea10a603cece3b69161c983b4983 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.138681 5122 scope.go:117] "RemoveContainer" containerID="e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 
00:19:11.138960 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea"} err="failed to get container status \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": rpc error: code = NotFound desc = could not find container \"e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea\": container with ID starting with e1111f64e08ab63faccae61ab7c2133e6a77449a89c87f479d8cdf2dd7cca0ea not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.139068 5122 scope.go:117] "RemoveContainer" containerID="6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.140211 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535"} err="failed to get container status \"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": rpc error: code = NotFound desc = could not find container \"6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535\": container with ID starting with 6687cc6bf0486b2c1dfb2f1a5433df50b6d1261dc3d24dcc35b6b2068faf5535 not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.140246 5122 scope.go:117] "RemoveContainer" containerID="51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.140623 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a"} err="failed to get container status \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": rpc error: code = NotFound desc = could not find container \"51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a\": container with ID starting with 
51b47edb781570c696c6ed0cd25f7debb557d72ae17272c99875dfea47eb355a not found: ID does not exist" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.790201 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f5a8e7-4852-4e7b-8dca-ce9f9facfe85" path="/var/lib/kubelet/pods/03f5a8e7-4852-4e7b-8dca-ce9f9facfe85/volumes" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.792058 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ea2c06-ac71-4ff2-aba9-54e26871039e" path="/var/lib/kubelet/pods/b3ea2c06-ac71-4ff2-aba9-54e26871039e/volumes" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.914391 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" event={"ID":"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc","Type":"ContainerStarted","Data":"dc43d6a2c814057815b6af89c9ca971c60941bd8177b38698e53c208920699c4"} Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.914454 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" event={"ID":"9ea2ce95-6d52-47bb-aad7-4bfc4a88f8bc","Type":"ContainerStarted","Data":"84a3d5ee40ace0169bf0f4b844fce2f3892cdee78cdcf8231a8d6ad4833357ac"} Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.934569 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jz28d_b5f97112-ba2a-46c0-a285-a845d2f96be9/kube-multus/0.log" Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.934722 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jz28d" event={"ID":"b5f97112-ba2a-46c0-a285-a845d2f96be9","Type":"ContainerStarted","Data":"d7fd70f31ef8042ffc90da0befe0b79141924925fee35da14106679e9cadc0e6"} Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.937717 5122 generic.go:358] "Generic (PLEG): container finished" podID="f14ef8d5-3c4e-4f86-8933-bf40ab75759b" 
containerID="7ccc5eaa2d496d8b6d699f514faf8e71ef55554fd104c5d6462e5d95deef383e" exitCode=0 Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.937993 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerDied","Data":"7ccc5eaa2d496d8b6d699f514faf8e71ef55554fd104c5d6462e5d95deef383e"} Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.938218 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"748e4b5ac86d8d52ba3dcff2412398e175a1b45ddbb44e19a2190b44975d6e13"} Feb 24 00:19:11 crc kubenswrapper[5122]: I0224 00:19:11.959298 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-shn9p" podStartSLOduration=1.959270297 podStartE2EDuration="1.959270297s" podCreationTimestamp="2026-02-24 00:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:19:11.957639094 +0000 UTC m=+619.047093707" watchObservedRunningTime="2026-02-24 00:19:11.959270297 +0000 UTC m=+619.048724850" Feb 24 00:19:12 crc kubenswrapper[5122]: I0224 00:19:12.955294 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"e7b1c2c626295b6b0468d6bde5c1fa3b6755d14956db9834f7e9aaf705fe8506"} Feb 24 00:19:12 crc kubenswrapper[5122]: I0224 00:19:12.955802 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"c46cd183183fa465ceda6a952539bc6d4661ff515295592596cf5c1264e32fff"} Feb 24 00:19:12 crc kubenswrapper[5122]: 
I0224 00:19:12.955819 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"ab2d480a7f5d71eb9264711405d6fd496f1f3ddddc477d189980c69208e47e68"} Feb 24 00:19:12 crc kubenswrapper[5122]: I0224 00:19:12.955834 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"a23d9e6cd6ff37b96bcd133b1b670b46e2d132d5a94a7e982dd281e877a0acb5"} Feb 24 00:19:12 crc kubenswrapper[5122]: I0224 00:19:12.955848 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"926753ba4279e784a25cbcd814bff3ce62e531361cb7dfbb3f211d543d879851"} Feb 24 00:19:12 crc kubenswrapper[5122]: I0224 00:19:12.955862 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"5a531138022e0bf56c0111b5ff4f24819d576686a3859b774da40f4e006e3f47"} Feb 24 00:19:15 crc kubenswrapper[5122]: I0224 00:19:15.984857 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"f0f8eb2e7ac84c860530cb21110ca65cf630226e4c99eac84eada7cd60e70dc7"} Feb 24 00:19:19 crc kubenswrapper[5122]: I0224 00:19:19.015882 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" event={"ID":"f14ef8d5-3c4e-4f86-8933-bf40ab75759b","Type":"ContainerStarted","Data":"9d409c43d11cac0794d0bf776d820e5323643927fb2a43e24bbae168b11334db"} Feb 24 00:19:19 crc kubenswrapper[5122]: I0224 00:19:19.016411 5122 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:19 crc kubenswrapper[5122]: I0224 00:19:19.016443 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:19 crc kubenswrapper[5122]: I0224 00:19:19.016467 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:19 crc kubenswrapper[5122]: I0224 00:19:19.059862 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:19 crc kubenswrapper[5122]: I0224 00:19:19.062942 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn" podStartSLOduration=9.06291683 podStartE2EDuration="9.06291683s" podCreationTimestamp="2026-02-24 00:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:19:19.060246848 +0000 UTC m=+626.149701431" watchObservedRunningTime="2026-02-24 00:19:19.06291683 +0000 UTC m=+626.152371383"
Feb 24 00:19:19 crc kubenswrapper[5122]: I0224 00:19:19.075758 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:19:27 crc kubenswrapper[5122]: I0224 00:19:27.115426 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:19:27 crc kubenswrapper[5122]: I0224 00:19:27.116230 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:19:27 crc kubenswrapper[5122]: I0224 00:19:27.116335 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:19:27 crc kubenswrapper[5122]: I0224 00:19:27.117271 5122 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2440177b838348268a0bef8a6e72892e9f62cf0d62c5963f5c3b068ced560cd"} pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 00:19:27 crc kubenswrapper[5122]: I0224 00:19:27.117383 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" containerID="cri-o://a2440177b838348268a0bef8a6e72892e9f62cf0d62c5963f5c3b068ced560cd" gracePeriod=600
Feb 24 00:19:28 crc kubenswrapper[5122]: I0224 00:19:28.078817 5122 generic.go:358] "Generic (PLEG): container finished" podID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerID="a2440177b838348268a0bef8a6e72892e9f62cf0d62c5963f5c3b068ced560cd" exitCode=0
Feb 24 00:19:28 crc kubenswrapper[5122]: I0224 00:19:28.079310 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerDied","Data":"a2440177b838348268a0bef8a6e72892e9f62cf0d62c5963f5c3b068ced560cd"}
Feb 24 00:19:28 crc kubenswrapper[5122]: I0224 00:19:28.079449 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"261340b5f7b11a4ce4a9ff704d0d02ee8484c6e0b40d48b9b50e904a701a287a"}
Feb 24 00:19:28 crc kubenswrapper[5122]: I0224 00:19:28.079492 5122 scope.go:117] "RemoveContainer" containerID="50ee2266507123df66125337ecf3ff8ca0f7771d42782902e0efdef0eafd857f"
Feb 24 00:19:51 crc kubenswrapper[5122]: I0224 00:19:51.064207 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk2qn"
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.130891 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531540-ppfkv"]
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.140670 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531540-ppfkv"]
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.140889 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531540-ppfkv"
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.143920 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\""
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.144561 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.144804 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.283658 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgndd\" (UniqueName: \"kubernetes.io/projected/63be25af-7c2b-453a-904e-98f05c102e49-kube-api-access-kgndd\") pod \"auto-csr-approver-29531540-ppfkv\" (UID: \"63be25af-7c2b-453a-904e-98f05c102e49\") " pod="openshift-infra/auto-csr-approver-29531540-ppfkv"
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.385539 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kgndd\" (UniqueName: \"kubernetes.io/projected/63be25af-7c2b-453a-904e-98f05c102e49-kube-api-access-kgndd\") pod \"auto-csr-approver-29531540-ppfkv\" (UID: \"63be25af-7c2b-453a-904e-98f05c102e49\") " pod="openshift-infra/auto-csr-approver-29531540-ppfkv"
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.419192 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgndd\" (UniqueName: \"kubernetes.io/projected/63be25af-7c2b-453a-904e-98f05c102e49-kube-api-access-kgndd\") pod \"auto-csr-approver-29531540-ppfkv\" (UID: \"63be25af-7c2b-453a-904e-98f05c102e49\") " pod="openshift-infra/auto-csr-approver-29531540-ppfkv"
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.502250 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531540-ppfkv"
Feb 24 00:20:00 crc kubenswrapper[5122]: I0224 00:20:00.752730 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531540-ppfkv"]
Feb 24 00:20:01 crc kubenswrapper[5122]: I0224 00:20:01.352121 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531540-ppfkv" event={"ID":"63be25af-7c2b-453a-904e-98f05c102e49","Type":"ContainerStarted","Data":"16a0958518e3dd445f1701a79c6df30b4897d37c2507d5e8fc54d6d57e235822"}
Feb 24 00:20:02 crc kubenswrapper[5122]: I0224 00:20:02.373345 5122 generic.go:358] "Generic (PLEG): container finished" podID="63be25af-7c2b-453a-904e-98f05c102e49" containerID="12b72396236406a2b4f1d88f75c3ab8e7fe0ed01d048573b6f2f4bad104558db" exitCode=0
Feb 24 00:20:02 crc kubenswrapper[5122]: I0224 00:20:02.373942 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531540-ppfkv" event={"ID":"63be25af-7c2b-453a-904e-98f05c102e49","Type":"ContainerDied","Data":"12b72396236406a2b4f1d88f75c3ab8e7fe0ed01d048573b6f2f4bad104558db"}
Feb 24 00:20:03 crc kubenswrapper[5122]: I0224 00:20:03.577334 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531540-ppfkv"
Feb 24 00:20:03 crc kubenswrapper[5122]: I0224 00:20:03.730412 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgndd\" (UniqueName: \"kubernetes.io/projected/63be25af-7c2b-453a-904e-98f05c102e49-kube-api-access-kgndd\") pod \"63be25af-7c2b-453a-904e-98f05c102e49\" (UID: \"63be25af-7c2b-453a-904e-98f05c102e49\") "
Feb 24 00:20:03 crc kubenswrapper[5122]: I0224 00:20:03.736371 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63be25af-7c2b-453a-904e-98f05c102e49-kube-api-access-kgndd" (OuterVolumeSpecName: "kube-api-access-kgndd") pod "63be25af-7c2b-453a-904e-98f05c102e49" (UID: "63be25af-7c2b-453a-904e-98f05c102e49"). InnerVolumeSpecName "kube-api-access-kgndd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:20:03 crc kubenswrapper[5122]: I0224 00:20:03.832687 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kgndd\" (UniqueName: \"kubernetes.io/projected/63be25af-7c2b-453a-904e-98f05c102e49-kube-api-access-kgndd\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:04 crc kubenswrapper[5122]: I0224 00:20:04.391313 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531540-ppfkv" event={"ID":"63be25af-7c2b-453a-904e-98f05c102e49","Type":"ContainerDied","Data":"16a0958518e3dd445f1701a79c6df30b4897d37c2507d5e8fc54d6d57e235822"}
Feb 24 00:20:04 crc kubenswrapper[5122]: I0224 00:20:04.391369 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16a0958518e3dd445f1701a79c6df30b4897d37c2507d5e8fc54d6d57e235822"
Feb 24 00:20:04 crc kubenswrapper[5122]: I0224 00:20:04.391386 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531540-ppfkv"
Feb 24 00:20:12 crc kubenswrapper[5122]: I0224 00:20:12.772771 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4j57"]
Feb 24 00:20:12 crc kubenswrapper[5122]: I0224 00:20:12.773745 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m4j57" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" containerName="registry-server" containerID="cri-o://014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e" gracePeriod=30
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.097021 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4j57"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.164063 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bqb8\" (UniqueName: \"kubernetes.io/projected/a97f6def-aff6-4a05-862a-959aa7b87606-kube-api-access-7bqb8\") pod \"a97f6def-aff6-4a05-862a-959aa7b87606\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") "
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.164251 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-utilities\") pod \"a97f6def-aff6-4a05-862a-959aa7b87606\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") "
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.164273 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-catalog-content\") pod \"a97f6def-aff6-4a05-862a-959aa7b87606\" (UID: \"a97f6def-aff6-4a05-862a-959aa7b87606\") "
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.165557 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-utilities" (OuterVolumeSpecName: "utilities") pod "a97f6def-aff6-4a05-862a-959aa7b87606" (UID: "a97f6def-aff6-4a05-862a-959aa7b87606"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.169766 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97f6def-aff6-4a05-862a-959aa7b87606-kube-api-access-7bqb8" (OuterVolumeSpecName: "kube-api-access-7bqb8") pod "a97f6def-aff6-4a05-862a-959aa7b87606" (UID: "a97f6def-aff6-4a05-862a-959aa7b87606"). InnerVolumeSpecName "kube-api-access-7bqb8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.179535 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a97f6def-aff6-4a05-862a-959aa7b87606" (UID: "a97f6def-aff6-4a05-862a-959aa7b87606"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.265646 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.265692 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97f6def-aff6-4a05-862a-959aa7b87606-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.265713 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bqb8\" (UniqueName: \"kubernetes.io/projected/a97f6def-aff6-4a05-862a-959aa7b87606-kube-api-access-7bqb8\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.452407 5122 generic.go:358] "Generic (PLEG): container finished" podID="a97f6def-aff6-4a05-862a-959aa7b87606" containerID="014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e" exitCode=0
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.452517 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4j57"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.452638 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4j57" event={"ID":"a97f6def-aff6-4a05-862a-959aa7b87606","Type":"ContainerDied","Data":"014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e"}
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.452676 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4j57" event={"ID":"a97f6def-aff6-4a05-862a-959aa7b87606","Type":"ContainerDied","Data":"308f66a4108a62ef53f551537dcc2204654915c7194f475ab5f111fa5d6ded80"}
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.452702 5122 scope.go:117] "RemoveContainer" containerID="014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.474156 5122 scope.go:117] "RemoveContainer" containerID="586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.492108 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4j57"]
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.496752 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4j57"]
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.511323 5122 scope.go:117] "RemoveContainer" containerID="13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.526782 5122 scope.go:117] "RemoveContainer" containerID="014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e"
Feb 24 00:20:13 crc kubenswrapper[5122]: E0224 00:20:13.527177 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e\": container with ID starting with 014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e not found: ID does not exist" containerID="014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.527222 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e"} err="failed to get container status \"014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e\": rpc error: code = NotFound desc = could not find container \"014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e\": container with ID starting with 014376c26ade4f7290bfc0ef96093c0e5c9b75d82e157fa73e5b51f70fe8621e not found: ID does not exist"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.527252 5122 scope.go:117] "RemoveContainer" containerID="586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7"
Feb 24 00:20:13 crc kubenswrapper[5122]: E0224 00:20:13.527754 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7\": container with ID starting with 586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7 not found: ID does not exist" containerID="586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.527869 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7"} err="failed to get container status \"586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7\": rpc error: code = NotFound desc = could not find container \"586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7\": container with ID starting with 586e131771d708263cb30cb207de476a07cc26981d41153bcb516e4cd98239d7 not found: ID does not exist"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.527967 5122 scope.go:117] "RemoveContainer" containerID="13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4"
Feb 24 00:20:13 crc kubenswrapper[5122]: E0224 00:20:13.528397 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4\": container with ID starting with 13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4 not found: ID does not exist" containerID="13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.528457 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4"} err="failed to get container status \"13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4\": rpc error: code = NotFound desc = could not find container \"13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4\": container with ID starting with 13db7cb5f342a0be887f6ea0c948b711aeb198a2c80f8dd8366b52fd1c25dde4 not found: ID does not exist"
Feb 24 00:20:13 crc kubenswrapper[5122]: I0224 00:20:13.783502 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" path="/var/lib/kubelet/pods/a97f6def-aff6-4a05-862a-959aa7b87606/volumes"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.415567 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"]
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416547 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" containerName="extract-content"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416570 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" containerName="extract-content"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416615 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" containerName="registry-server"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416626 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" containerName="registry-server"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416638 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="63be25af-7c2b-453a-904e-98f05c102e49" containerName="oc"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416649 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="63be25af-7c2b-453a-904e-98f05c102e49" containerName="oc"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416665 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" containerName="extract-utilities"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416678 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" containerName="extract-utilities"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416814 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="a97f6def-aff6-4a05-862a-959aa7b87606" containerName="registry-server"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.416844 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="63be25af-7c2b-453a-904e-98f05c102e49" containerName="oc"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.433485 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"]
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.433641 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.436064 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.506936 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8x5g\" (UniqueName: \"kubernetes.io/projected/0718ee02-3adb-41a4-aff8-2e4778f60c2d-kube-api-access-m8x5g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.507018 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.507162 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.608857 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.608923 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m8x5g\" (UniqueName: \"kubernetes.io/projected/0718ee02-3adb-41a4-aff8-2e4778f60c2d-kube-api-access-m8x5g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.608975 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.609368 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.609612 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.629027 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8x5g\" (UniqueName: \"kubernetes.io/projected/0718ee02-3adb-41a4-aff8-2e4778f60c2d-kube-api-access-m8x5g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:16 crc kubenswrapper[5122]: I0224 00:20:16.759167 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:17 crc kubenswrapper[5122]: I0224 00:20:17.192380 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"]
Feb 24 00:20:17 crc kubenswrapper[5122]: I0224 00:20:17.482861 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz" event={"ID":"0718ee02-3adb-41a4-aff8-2e4778f60c2d","Type":"ContainerStarted","Data":"7eb3234682a5dfc0db6ef97bf505c123048fd1f6134d8b512997adff860a02eb"}
Feb 24 00:20:17 crc kubenswrapper[5122]: I0224 00:20:17.483268 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz" event={"ID":"0718ee02-3adb-41a4-aff8-2e4778f60c2d","Type":"ContainerStarted","Data":"0414fae8eb5fd58c9a44489bc23f1d5f568f8bc71ea7b421444446d4d9e5c8dc"}
Feb 24 00:20:18 crc kubenswrapper[5122]: I0224 00:20:18.490621 5122 generic.go:358] "Generic (PLEG): container finished" podID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerID="7eb3234682a5dfc0db6ef97bf505c123048fd1f6134d8b512997adff860a02eb" exitCode=0
Feb 24 00:20:18 crc kubenswrapper[5122]: I0224 00:20:18.490675 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz" event={"ID":"0718ee02-3adb-41a4-aff8-2e4778f60c2d","Type":"ContainerDied","Data":"7eb3234682a5dfc0db6ef97bf505c123048fd1f6134d8b512997adff860a02eb"}
Feb 24 00:20:20 crc kubenswrapper[5122]: I0224 00:20:20.528277 5122 generic.go:358] "Generic (PLEG): container finished" podID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerID="e6e7767a181f0faedced0c5a90ce9576a96f529e1e4e7becd3abdfbe191469cf" exitCode=0
Feb 24 00:20:20 crc kubenswrapper[5122]: I0224 00:20:20.528391 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz" event={"ID":"0718ee02-3adb-41a4-aff8-2e4778f60c2d","Type":"ContainerDied","Data":"e6e7767a181f0faedced0c5a90ce9576a96f529e1e4e7becd3abdfbe191469cf"}
Feb 24 00:20:21 crc kubenswrapper[5122]: I0224 00:20:21.537687 5122 generic.go:358] "Generic (PLEG): container finished" podID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerID="7e2a087938e8dd3e947a471e522da5ea680bc61764908781a4f4f24df315f53e" exitCode=0
Feb 24 00:20:21 crc kubenswrapper[5122]: I0224 00:20:21.537739 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz" event={"ID":"0718ee02-3adb-41a4-aff8-2e4778f60c2d","Type":"ContainerDied","Data":"7e2a087938e8dd3e947a471e522da5ea680bc61764908781a4f4f24df315f53e"}
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.439560 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"]
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.836853 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"]
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.837022 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.885801 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtcx\" (UniqueName: \"kubernetes.io/projected/ce4ede87-e12f-4cba-bc09-e436e147fe31-kube-api-access-thtcx\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.885858 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.885897 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.987264 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-thtcx\" (UniqueName: \"kubernetes.io/projected/ce4ede87-e12f-4cba-bc09-e436e147fe31-kube-api-access-thtcx\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.987648 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.987689 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.988060 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:22 crc kubenswrapper[5122]: I0224 00:20:22.988062 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.007237 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-thtcx\" (UniqueName: \"kubernetes.io/projected/ce4ede87-e12f-4cba-bc09-e436e147fe31-kube-api-access-thtcx\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.030544 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz"
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.089620 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8x5g\" (UniqueName: \"kubernetes.io/projected/0718ee02-3adb-41a4-aff8-2e4778f60c2d-kube-api-access-m8x5g\") pod \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") "
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.089750 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-bundle\") pod \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") "
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.089790 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-util\") pod \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\" (UID: \"0718ee02-3adb-41a4-aff8-2e4778f60c2d\") "
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.092233 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-bundle" (OuterVolumeSpecName: "bundle") pod "0718ee02-3adb-41a4-aff8-2e4778f60c2d" (UID: "0718ee02-3adb-41a4-aff8-2e4778f60c2d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.096276 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0718ee02-3adb-41a4-aff8-2e4778f60c2d-kube-api-access-m8x5g" (OuterVolumeSpecName: "kube-api-access-m8x5g") pod "0718ee02-3adb-41a4-aff8-2e4778f60c2d" (UID: "0718ee02-3adb-41a4-aff8-2e4778f60c2d"). InnerVolumeSpecName "kube-api-access-m8x5g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.099387 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-util" (OuterVolumeSpecName: "util") pod "0718ee02-3adb-41a4-aff8-2e4778f60c2d" (UID: "0718ee02-3adb-41a4-aff8-2e4778f60c2d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.153155 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.190758 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m8x5g\" (UniqueName: \"kubernetes.io/projected/0718ee02-3adb-41a4-aff8-2e4778f60c2d-kube-api-access-m8x5g\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.190782 5122 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.190790 5122 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0718ee02-3adb-41a4-aff8-2e4778f60c2d-util\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.347877 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc"]
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.414282 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7"]
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.416166 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerName="util"
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.416199 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerName="util"
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.416235 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerName="extract"
Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.416245 5122
state_mem.go:107] "Deleted CPUSet assignment" podUID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerName="extract" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.416272 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerName="pull" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.416284 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerName="pull" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.416731 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="0718ee02-3adb-41a4-aff8-2e4778f60c2d" containerName="extract" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.427991 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7"] Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.428145 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.494987 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.495075 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnh6b\" (UniqueName: \"kubernetes.io/projected/4473ff86-fc0e-40e2-8698-19569caf6272-kube-api-access-lnh6b\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.495134 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.551668 5122 generic.go:358] "Generic (PLEG): container finished" podID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerID="41422f8637bcb9c3218485989f8bda70a4e05fa843973cfdec6063afc0097c66" exitCode=0 Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.551892 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc" event={"ID":"ce4ede87-e12f-4cba-bc09-e436e147fe31","Type":"ContainerDied","Data":"41422f8637bcb9c3218485989f8bda70a4e05fa843973cfdec6063afc0097c66"} Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.551991 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc" event={"ID":"ce4ede87-e12f-4cba-bc09-e436e147fe31","Type":"ContainerStarted","Data":"47f3e5118d9f59778fa794f9cec3b247d07d775c7e66d5f22b250b11bb9c72e3"} Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.554645 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.554744 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz" event={"ID":"0718ee02-3adb-41a4-aff8-2e4778f60c2d","Type":"ContainerDied","Data":"0414fae8eb5fd58c9a44489bc23f1d5f568f8bc71ea7b421444446d4d9e5c8dc"} Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.554788 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0414fae8eb5fd58c9a44489bc23f1d5f568f8bc71ea7b421444446d4d9e5c8dc" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.596548 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.596681 5122 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lnh6b\" (UniqueName: \"kubernetes.io/projected/4473ff86-fc0e-40e2-8698-19569caf6272-kube-api-access-lnh6b\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.596718 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.597812 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.597856 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.614301 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh6b\" (UniqueName: \"kubernetes.io/projected/4473ff86-fc0e-40e2-8698-19569caf6272-kube-api-access-lnh6b\") pod 
\"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.753175 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:23 crc kubenswrapper[5122]: I0224 00:20:23.957541 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7"] Feb 24 00:20:23 crc kubenswrapper[5122]: W0224 00:20:23.965883 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4473ff86_fc0e_40e2_8698_19569caf6272.slice/crio-4cddfdd126b32d45f153b1b752d75a444b2540f2a70cfcfa1ec799a7a4f67fbe WatchSource:0}: Error finding container 4cddfdd126b32d45f153b1b752d75a444b2540f2a70cfcfa1ec799a7a4f67fbe: Status 404 returned error can't find the container with id 4cddfdd126b32d45f153b1b752d75a444b2540f2a70cfcfa1ec799a7a4f67fbe Feb 24 00:20:24 crc kubenswrapper[5122]: I0224 00:20:24.571537 5122 generic.go:358] "Generic (PLEG): container finished" podID="4473ff86-fc0e-40e2-8698-19569caf6272" containerID="229c6f8cd60173c6f2fa6d45e459ac97ca1af55b9c0ea92cbc215e3a228e459c" exitCode=0 Feb 24 00:20:24 crc kubenswrapper[5122]: I0224 00:20:24.571790 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" event={"ID":"4473ff86-fc0e-40e2-8698-19569caf6272","Type":"ContainerDied","Data":"229c6f8cd60173c6f2fa6d45e459ac97ca1af55b9c0ea92cbc215e3a228e459c"} Feb 24 00:20:24 crc kubenswrapper[5122]: I0224 00:20:24.571823 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" event={"ID":"4473ff86-fc0e-40e2-8698-19569caf6272","Type":"ContainerStarted","Data":"4cddfdd126b32d45f153b1b752d75a444b2540f2a70cfcfa1ec799a7a4f67fbe"} Feb 24 00:20:24 crc kubenswrapper[5122]: I0224 00:20:24.574831 5122 generic.go:358] "Generic (PLEG): container finished" podID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerID="38dcfbab4e662e1559ba778309b9d6754cd3a8ab6bfdbc521a03893bfdad3e84" exitCode=0 Feb 24 00:20:24 crc kubenswrapper[5122]: I0224 00:20:24.574942 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc" event={"ID":"ce4ede87-e12f-4cba-bc09-e436e147fe31","Type":"ContainerDied","Data":"38dcfbab4e662e1559ba778309b9d6754cd3a8ab6bfdbc521a03893bfdad3e84"} Feb 24 00:20:25 crc kubenswrapper[5122]: I0224 00:20:25.584543 5122 generic.go:358] "Generic (PLEG): container finished" podID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerID="3e4a89a8fa927e91a816941b343641266854a6d746f9e9365ef983b2292a956d" exitCode=0 Feb 24 00:20:25 crc kubenswrapper[5122]: I0224 00:20:25.584621 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc" event={"ID":"ce4ede87-e12f-4cba-bc09-e436e147fe31","Type":"ContainerDied","Data":"3e4a89a8fa927e91a816941b343641266854a6d746f9e9365ef983b2292a956d"} Feb 24 00:20:26 crc kubenswrapper[5122]: I0224 00:20:26.591104 5122 generic.go:358] "Generic (PLEG): container finished" podID="4473ff86-fc0e-40e2-8698-19569caf6272" containerID="a0a9103ff3271afafd8ae7880aae686d1730b006a3ca26d512bcdf43aab7ccbb" exitCode=0 Feb 24 00:20:26 crc kubenswrapper[5122]: I0224 00:20:26.591172 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" 
event={"ID":"4473ff86-fc0e-40e2-8698-19569caf6272","Type":"ContainerDied","Data":"a0a9103ff3271afafd8ae7880aae686d1730b006a3ca26d512bcdf43aab7ccbb"} Feb 24 00:20:26 crc kubenswrapper[5122]: I0224 00:20:26.884676 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc" Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.035681 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-util\") pod \"ce4ede87-e12f-4cba-bc09-e436e147fe31\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.035723 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-bundle\") pod \"ce4ede87-e12f-4cba-bc09-e436e147fe31\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.035755 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thtcx\" (UniqueName: \"kubernetes.io/projected/ce4ede87-e12f-4cba-bc09-e436e147fe31-kube-api-access-thtcx\") pod \"ce4ede87-e12f-4cba-bc09-e436e147fe31\" (UID: \"ce4ede87-e12f-4cba-bc09-e436e147fe31\") " Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.036775 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-bundle" (OuterVolumeSpecName: "bundle") pod "ce4ede87-e12f-4cba-bc09-e436e147fe31" (UID: "ce4ede87-e12f-4cba-bc09-e436e147fe31"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.049114 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4ede87-e12f-4cba-bc09-e436e147fe31-kube-api-access-thtcx" (OuterVolumeSpecName: "kube-api-access-thtcx") pod "ce4ede87-e12f-4cba-bc09-e436e147fe31" (UID: "ce4ede87-e12f-4cba-bc09-e436e147fe31"). InnerVolumeSpecName "kube-api-access-thtcx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.058058 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-util" (OuterVolumeSpecName: "util") pod "ce4ede87-e12f-4cba-bc09-e436e147fe31" (UID: "ce4ede87-e12f-4cba-bc09-e436e147fe31"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.137008 5122 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-util\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.137052 5122 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4ede87-e12f-4cba-bc09-e436e147fe31-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.137065 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-thtcx\" (UniqueName: \"kubernetes.io/projected/ce4ede87-e12f-4cba-bc09-e436e147fe31-kube-api-access-thtcx\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.599633 5122 generic.go:358] "Generic (PLEG): container finished" podID="4473ff86-fc0e-40e2-8698-19569caf6272" containerID="931741ce701cd050d22cb7b3d3ef1cd721f112734f5b0c0c7217ef673c5ee4c0" exitCode=0 Feb 24 00:20:27 crc 
kubenswrapper[5122]: I0224 00:20:27.599681 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" event={"ID":"4473ff86-fc0e-40e2-8698-19569caf6272","Type":"ContainerDied","Data":"931741ce701cd050d22cb7b3d3ef1cd721f112734f5b0c0c7217ef673c5ee4c0"} Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.602455 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc" event={"ID":"ce4ede87-e12f-4cba-bc09-e436e147fe31","Type":"ContainerDied","Data":"47f3e5118d9f59778fa794f9cec3b247d07d775c7e66d5f22b250b11bb9c72e3"} Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.602490 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc" Feb 24 00:20:27 crc kubenswrapper[5122]: I0224 00:20:27.602498 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47f3e5118d9f59778fa794f9cec3b247d07d775c7e66d5f22b250b11bb9c72e3" Feb 24 00:20:28 crc kubenswrapper[5122]: I0224 00:20:28.956448 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.063987 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnh6b\" (UniqueName: \"kubernetes.io/projected/4473ff86-fc0e-40e2-8698-19569caf6272-kube-api-access-lnh6b\") pod \"4473ff86-fc0e-40e2-8698-19569caf6272\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.064042 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-bundle\") pod \"4473ff86-fc0e-40e2-8698-19569caf6272\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.064092 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-util\") pod \"4473ff86-fc0e-40e2-8698-19569caf6272\" (UID: \"4473ff86-fc0e-40e2-8698-19569caf6272\") " Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.064770 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-bundle" (OuterVolumeSpecName: "bundle") pod "4473ff86-fc0e-40e2-8698-19569caf6272" (UID: "4473ff86-fc0e-40e2-8698-19569caf6272"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.069428 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4473ff86-fc0e-40e2-8698-19569caf6272-kube-api-access-lnh6b" (OuterVolumeSpecName: "kube-api-access-lnh6b") pod "4473ff86-fc0e-40e2-8698-19569caf6272" (UID: "4473ff86-fc0e-40e2-8698-19569caf6272"). InnerVolumeSpecName "kube-api-access-lnh6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.078736 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-util" (OuterVolumeSpecName: "util") pod "4473ff86-fc0e-40e2-8698-19569caf6272" (UID: "4473ff86-fc0e-40e2-8698-19569caf6272"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.165245 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lnh6b\" (UniqueName: \"kubernetes.io/projected/4473ff86-fc0e-40e2-8698-19569caf6272-kube-api-access-lnh6b\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.165312 5122 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.165326 5122 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4473ff86-fc0e-40e2-8698-19569caf6272-util\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.615356 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" event={"ID":"4473ff86-fc0e-40e2-8698-19569caf6272","Type":"ContainerDied","Data":"4cddfdd126b32d45f153b1b752d75a444b2540f2a70cfcfa1ec799a7a4f67fbe"} Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.615623 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cddfdd126b32d45f153b1b752d75a444b2540f2a70cfcfa1ec799a7a4f67fbe" Feb 24 00:20:29 crc kubenswrapper[5122]: I0224 00:20:29.615437 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.413322 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl"] Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414050 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4473ff86-fc0e-40e2-8698-19569caf6272" containerName="util" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414066 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="4473ff86-fc0e-40e2-8698-19569caf6272" containerName="util" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414104 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4473ff86-fc0e-40e2-8698-19569caf6272" containerName="extract" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414112 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="4473ff86-fc0e-40e2-8698-19569caf6272" containerName="extract" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414133 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4473ff86-fc0e-40e2-8698-19569caf6272" containerName="pull" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414141 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="4473ff86-fc0e-40e2-8698-19569caf6272" containerName="pull" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414170 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerName="extract" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414178 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerName="extract" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414197 5122 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerName="pull" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414205 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerName="pull" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414215 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerName="util" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414222 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerName="util" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414347 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="4473ff86-fc0e-40e2-8698-19569caf6272" containerName="extract" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.414364 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="ce4ede87-e12f-4cba-bc09-e436e147fe31" containerName="extract" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.481551 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl"] Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.481702 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.484475 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.596790 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.596977 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.597138 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx66b\" (UniqueName: \"kubernetes.io/projected/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-kube-api-access-xx66b\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.698258 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.698322 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xx66b\" (UniqueName: \"kubernetes.io/projected/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-kube-api-access-xx66b\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.698447 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.698815 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.698834 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: 
\"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.718568 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx66b\" (UniqueName: \"kubernetes.io/projected/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-kube-api-access-xx66b\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:31 crc kubenswrapper[5122]: I0224 00:20:31.796724 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" Feb 24 00:20:32 crc kubenswrapper[5122]: I0224 00:20:32.288989 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl"] Feb 24 00:20:32 crc kubenswrapper[5122]: I0224 00:20:32.631009 5122 generic.go:358] "Generic (PLEG): container finished" podID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerID="a49523e2075f83c687be29c46805989c38747d2a0b185af9a18a5e5ddb056815" exitCode=0 Feb 24 00:20:32 crc kubenswrapper[5122]: I0224 00:20:32.631113 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" event={"ID":"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9","Type":"ContainerDied","Data":"a49523e2075f83c687be29c46805989c38747d2a0b185af9a18a5e5ddb056815"} Feb 24 00:20:32 crc kubenswrapper[5122]: I0224 00:20:32.631155 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" 
event={"ID":"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9","Type":"ContainerStarted","Data":"bf18c952cbd5a126e6ac53b44cc2b93ba38e74d3bd376c0af47e9eacdeb6048e"} Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.263901 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.337684 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.337845 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.340758 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.340957 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-xcc5n\"" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.341114 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.381892 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.451312 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.451390 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.453492 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-qp988\"" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.453574 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.483001 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.483056 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.483138 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.523145 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4n7q\" (UniqueName: \"kubernetes.io/projected/e8237e2d-982e-4f5a-80ea-597aaebed4a1-kube-api-access-z4n7q\") pod \"obo-prometheus-operator-9bc85b4bf-qgg49\" (UID: \"e8237e2d-982e-4f5a-80ea-597aaebed4a1\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.530013 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-6n2zd"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.624411 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4n7q\" (UniqueName: \"kubernetes.io/projected/e8237e2d-982e-4f5a-80ea-597aaebed4a1-kube-api-access-z4n7q\") pod \"obo-prometheus-operator-9bc85b4bf-qgg49\" (UID: \"e8237e2d-982e-4f5a-80ea-597aaebed4a1\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.624647 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97106b9b-a13e-4f80-8da1-ff7885b694b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-mkqml\" (UID: \"97106b9b-a13e-4f80-8da1-ff7885b694b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.624668 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97106b9b-a13e-4f80-8da1-ff7885b694b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-mkqml\" (UID: 
\"97106b9b-a13e-4f80-8da1-ff7885b694b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.624707 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc26f0ac-7fb0-4223-a613-38006bf7ed17-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt\" (UID: \"bc26f0ac-7fb0-4223-a613-38006bf7ed17\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.624805 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc26f0ac-7fb0-4223-a613-38006bf7ed17-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt\" (UID: \"bc26f0ac-7fb0-4223-a613-38006bf7ed17\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.643979 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4n7q\" (UniqueName: \"kubernetes.io/projected/e8237e2d-982e-4f5a-80ea-597aaebed4a1-kube-api-access-z4n7q\") pod \"obo-prometheus-operator-9bc85b4bf-qgg49\" (UID: \"e8237e2d-982e-4f5a-80ea-597aaebed4a1\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.655273 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.665459 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-6n2zd"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.665639 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-6n2zd" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.669195 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.670291 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-tfmdg\"" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.695362 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-lftx5"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.725634 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97106b9b-a13e-4f80-8da1-ff7885b694b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-mkqml\" (UID: \"97106b9b-a13e-4f80-8da1-ff7885b694b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.725689 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc26f0ac-7fb0-4223-a613-38006bf7ed17-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt\" (UID: \"bc26f0ac-7fb0-4223-a613-38006bf7ed17\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" Feb 24 00:20:33 crc 
kubenswrapper[5122]: I0224 00:20:33.725733 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc26f0ac-7fb0-4223-a613-38006bf7ed17-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt\" (UID: \"bc26f0ac-7fb0-4223-a613-38006bf7ed17\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.725775 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97106b9b-a13e-4f80-8da1-ff7885b694b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-mkqml\" (UID: \"97106b9b-a13e-4f80-8da1-ff7885b694b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.729941 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97106b9b-a13e-4f80-8da1-ff7885b694b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-mkqml\" (UID: \"97106b9b-a13e-4f80-8da1-ff7885b694b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.730535 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97106b9b-a13e-4f80-8da1-ff7885b694b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-mkqml\" (UID: \"97106b9b-a13e-4f80-8da1-ff7885b694b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.731867 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/bc26f0ac-7fb0-4223-a613-38006bf7ed17-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt\" (UID: \"bc26f0ac-7fb0-4223-a613-38006bf7ed17\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.732259 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc26f0ac-7fb0-4223-a613-38006bf7ed17-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt\" (UID: \"bc26f0ac-7fb0-4223-a613-38006bf7ed17\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.765376 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.798811 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.800320 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.810498 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-twwhs\"" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.816683 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-lftx5"] Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.826381 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd92ea8-69cf-470b-a538-07cf775804b2-observability-operator-tls\") pod \"observability-operator-85c68dddb-6n2zd\" (UID: \"ccd92ea8-69cf-470b-a538-07cf775804b2\") " pod="openshift-operators/observability-operator-85c68dddb-6n2zd" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.826689 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpts\" (UniqueName: \"kubernetes.io/projected/ccd92ea8-69cf-470b-a538-07cf775804b2-kube-api-access-xwpts\") pod \"observability-operator-85c68dddb-6n2zd\" (UID: \"ccd92ea8-69cf-470b-a538-07cf775804b2\") " pod="openshift-operators/observability-operator-85c68dddb-6n2zd" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.935771 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e000557b-7587-4dce-913f-5a9a064194ad-openshift-service-ca\") pod \"perses-operator-669c9f96b5-lftx5\" (UID: \"e000557b-7587-4dce-913f-5a9a064194ad\") " pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.935926 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-49ggc\" (UniqueName: \"kubernetes.io/projected/e000557b-7587-4dce-913f-5a9a064194ad-kube-api-access-49ggc\") pod \"perses-operator-669c9f96b5-lftx5\" (UID: \"e000557b-7587-4dce-913f-5a9a064194ad\") " pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.935955 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd92ea8-69cf-470b-a538-07cf775804b2-observability-operator-tls\") pod \"observability-operator-85c68dddb-6n2zd\" (UID: \"ccd92ea8-69cf-470b-a538-07cf775804b2\") " pod="openshift-operators/observability-operator-85c68dddb-6n2zd" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.935992 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xwpts\" (UniqueName: \"kubernetes.io/projected/ccd92ea8-69cf-470b-a538-07cf775804b2-kube-api-access-xwpts\") pod \"observability-operator-85c68dddb-6n2zd\" (UID: \"ccd92ea8-69cf-470b-a538-07cf775804b2\") " pod="openshift-operators/observability-operator-85c68dddb-6n2zd" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.941884 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccd92ea8-69cf-470b-a538-07cf775804b2-observability-operator-tls\") pod \"observability-operator-85c68dddb-6n2zd\" (UID: \"ccd92ea8-69cf-470b-a538-07cf775804b2\") " pod="openshift-operators/observability-operator-85c68dddb-6n2zd" Feb 24 00:20:33 crc kubenswrapper[5122]: I0224 00:20:33.983943 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwpts\" (UniqueName: \"kubernetes.io/projected/ccd92ea8-69cf-470b-a538-07cf775804b2-kube-api-access-xwpts\") pod \"observability-operator-85c68dddb-6n2zd\" (UID: \"ccd92ea8-69cf-470b-a538-07cf775804b2\") " 
pod="openshift-operators/observability-operator-85c68dddb-6n2zd" Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.027486 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-6n2zd" Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.037633 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e000557b-7587-4dce-913f-5a9a064194ad-openshift-service-ca\") pod \"perses-operator-669c9f96b5-lftx5\" (UID: \"e000557b-7587-4dce-913f-5a9a064194ad\") " pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.037771 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-49ggc\" (UniqueName: \"kubernetes.io/projected/e000557b-7587-4dce-913f-5a9a064194ad-kube-api-access-49ggc\") pod \"perses-operator-669c9f96b5-lftx5\" (UID: \"e000557b-7587-4dce-913f-5a9a064194ad\") " pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.038642 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e000557b-7587-4dce-913f-5a9a064194ad-openshift-service-ca\") pod \"perses-operator-669c9f96b5-lftx5\" (UID: \"e000557b-7587-4dce-913f-5a9a064194ad\") " pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.077123 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-49ggc\" (UniqueName: \"kubernetes.io/projected/e000557b-7587-4dce-913f-5a9a064194ad-kube-api-access-49ggc\") pod \"perses-operator-669c9f96b5-lftx5\" (UID: \"e000557b-7587-4dce-913f-5a9a064194ad\") " pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.126931 
5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49"] Feb 24 00:20:34 crc kubenswrapper[5122]: W0224 00:20:34.133924 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8237e2d_982e_4f5a_80ea_597aaebed4a1.slice/crio-ea6d6b71150ab6550e17dfa43bea9266d4e2302f5178868f4246e0abe38e6e9b WatchSource:0}: Error finding container ea6d6b71150ab6550e17dfa43bea9266d4e2302f5178868f4246e0abe38e6e9b: Status 404 returned error can't find the container with id ea6d6b71150ab6550e17dfa43bea9266d4e2302f5178868f4246e0abe38e6e9b Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.179350 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.188090 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml"] Feb 24 00:20:34 crc kubenswrapper[5122]: W0224 00:20:34.208665 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97106b9b_a13e_4f80_8da1_ff7885b694b8.slice/crio-7e388754cb4a45b2d846ca460c967a752207a507a94ae183dbb57c942e7e6e51 WatchSource:0}: Error finding container 7e388754cb4a45b2d846ca460c967a752207a507a94ae183dbb57c942e7e6e51: Status 404 returned error can't find the container with id 7e388754cb4a45b2d846ca460c967a752207a507a94ae183dbb57c942e7e6e51 Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.438613 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt"] Feb 24 00:20:34 crc kubenswrapper[5122]: W0224 00:20:34.444593 5122 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc26f0ac_7fb0_4223_a613_38006bf7ed17.slice/crio-e81bb5e3d49ab12be22e9dc05ec2e3d87fced8c88e88a9ac49679ec2a3dd2ce0 WatchSource:0}: Error finding container e81bb5e3d49ab12be22e9dc05ec2e3d87fced8c88e88a9ac49679ec2a3dd2ce0: Status 404 returned error can't find the container with id e81bb5e3d49ab12be22e9dc05ec2e3d87fced8c88e88a9ac49679ec2a3dd2ce0 Feb 24 00:20:34 crc kubenswrapper[5122]: W0224 00:20:34.527832 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccd92ea8_69cf_470b_a538_07cf775804b2.slice/crio-a557f02de0161014efd1e937fa3ef33368bc3891e43602aecf534016fdcb0317 WatchSource:0}: Error finding container a557f02de0161014efd1e937fa3ef33368bc3891e43602aecf534016fdcb0317: Status 404 returned error can't find the container with id a557f02de0161014efd1e937fa3ef33368bc3891e43602aecf534016fdcb0317 Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.528597 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-6n2zd"] Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.643521 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-6n2zd" event={"ID":"ccd92ea8-69cf-470b-a538-07cf775804b2","Type":"ContainerStarted","Data":"a557f02de0161014efd1e937fa3ef33368bc3891e43602aecf534016fdcb0317"} Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.645314 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" event={"ID":"bc26f0ac-7fb0-4223-a613-38006bf7ed17","Type":"ContainerStarted","Data":"e81bb5e3d49ab12be22e9dc05ec2e3d87fced8c88e88a9ac49679ec2a3dd2ce0"} Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.646308 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49" event={"ID":"e8237e2d-982e-4f5a-80ea-597aaebed4a1","Type":"ContainerStarted","Data":"ea6d6b71150ab6550e17dfa43bea9266d4e2302f5178868f4246e0abe38e6e9b"} Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.647536 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" event={"ID":"97106b9b-a13e-4f80-8da1-ff7885b694b8","Type":"ContainerStarted","Data":"7e388754cb4a45b2d846ca460c967a752207a507a94ae183dbb57c942e7e6e51"} Feb 24 00:20:34 crc kubenswrapper[5122]: I0224 00:20:34.672785 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-lftx5"] Feb 24 00:20:35 crc kubenswrapper[5122]: I0224 00:20:35.659722 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-lftx5" event={"ID":"e000557b-7587-4dce-913f-5a9a064194ad","Type":"ContainerStarted","Data":"8a360f20e0e90862d07e5ab880c1b3d17834a02b0a84d161faab353a67488e57"} Feb 24 00:20:37 crc kubenswrapper[5122]: I0224 00:20:37.849046 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-pmfvq"] Feb 24 00:20:38 crc kubenswrapper[5122]: I0224 00:20:38.034408 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-pmfvq"] Feb 24 00:20:38 crc kubenswrapper[5122]: I0224 00:20:38.034527 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-pmfvq" Feb 24 00:20:38 crc kubenswrapper[5122]: I0224 00:20:38.055432 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-cx4g4\"" Feb 24 00:20:38 crc kubenswrapper[5122]: I0224 00:20:38.055771 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 24 00:20:38 crc kubenswrapper[5122]: I0224 00:20:38.058275 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 24 00:20:38 crc kubenswrapper[5122]: I0224 00:20:38.204583 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748vs\" (UniqueName: \"kubernetes.io/projected/9fb9138a-b621-43e2-b489-a7fbaacdbdf5-kube-api-access-748vs\") pod \"interconnect-operator-78b9bd8798-pmfvq\" (UID: \"9fb9138a-b621-43e2-b489-a7fbaacdbdf5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-pmfvq" Feb 24 00:20:38 crc kubenswrapper[5122]: I0224 00:20:38.306124 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-748vs\" (UniqueName: \"kubernetes.io/projected/9fb9138a-b621-43e2-b489-a7fbaacdbdf5-kube-api-access-748vs\") pod \"interconnect-operator-78b9bd8798-pmfvq\" (UID: \"9fb9138a-b621-43e2-b489-a7fbaacdbdf5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-pmfvq" Feb 24 00:20:38 crc kubenswrapper[5122]: I0224 00:20:38.333756 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-748vs\" (UniqueName: \"kubernetes.io/projected/9fb9138a-b621-43e2-b489-a7fbaacdbdf5-kube-api-access-748vs\") pod \"interconnect-operator-78b9bd8798-pmfvq\" (UID: \"9fb9138a-b621-43e2-b489-a7fbaacdbdf5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-pmfvq" Feb 24 00:20:38 
crc kubenswrapper[5122]: I0224 00:20:38.361303 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-pmfvq" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.396859 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7f674d4ff9-mj85p"] Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.802970 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7f674d4ff9-mj85p"] Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.803137 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.806467 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-qt98p\"" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.806467 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.840807 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c0ca16f-37b3-448c-918a-86916007515c-webhook-cert\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.840888 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c0ca16f-37b3-448c-918a-86916007515c-apiservice-cert\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" Feb 24 00:20:40 crc 
kubenswrapper[5122]: I0224 00:20:40.840942 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbcq\" (UniqueName: \"kubernetes.io/projected/7c0ca16f-37b3-448c-918a-86916007515c-kube-api-access-wjbcq\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.942328 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c0ca16f-37b3-448c-918a-86916007515c-webhook-cert\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.942416 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c0ca16f-37b3-448c-918a-86916007515c-apiservice-cert\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.942461 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wjbcq\" (UniqueName: \"kubernetes.io/projected/7c0ca16f-37b3-448c-918a-86916007515c-kube-api-access-wjbcq\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.947771 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c0ca16f-37b3-448c-918a-86916007515c-webhook-cert\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " 
pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p"
Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.949211 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c0ca16f-37b3-448c-918a-86916007515c-apiservice-cert\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p"
Feb 24 00:20:40 crc kubenswrapper[5122]: I0224 00:20:40.962862 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjbcq\" (UniqueName: \"kubernetes.io/projected/7c0ca16f-37b3-448c-918a-86916007515c-kube-api-access-wjbcq\") pod \"elastic-operator-7f674d4ff9-mj85p\" (UID: \"7c0ca16f-37b3-448c-918a-86916007515c\") " pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p"
Feb 24 00:20:41 crc kubenswrapper[5122]: I0224 00:20:41.125809 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p"
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.354533 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-pmfvq"]
Feb 24 00:20:48 crc kubenswrapper[5122]: W0224 00:20:48.356002 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fb9138a_b621_43e2_b489_a7fbaacdbdf5.slice/crio-598cc678fb72fc8a3b0137bde568a59b358fa614bb9f98eaea76c7866abe1daa WatchSource:0}: Error finding container 598cc678fb72fc8a3b0137bde568a59b358fa614bb9f98eaea76c7866abe1daa: Status 404 returned error can't find the container with id 598cc678fb72fc8a3b0137bde568a59b358fa614bb9f98eaea76c7866abe1daa
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.488708 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7f674d4ff9-mj85p"]
Feb 24 00:20:48 crc kubenswrapper[5122]: W0224 00:20:48.512946 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c0ca16f_37b3_448c_918a_86916007515c.slice/crio-765d71ea04f799ff3372da46b46257e0b796a77f4a14d1ac0b7d39f7668d8741 WatchSource:0}: Error finding container 765d71ea04f799ff3372da46b46257e0b796a77f4a14d1ac0b7d39f7668d8741: Status 404 returned error can't find the container with id 765d71ea04f799ff3372da46b46257e0b796a77f4a14d1ac0b7d39f7668d8741
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.745298 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49" event={"ID":"e8237e2d-982e-4f5a-80ea-597aaebed4a1","Type":"ContainerStarted","Data":"011e111ada66c75fd8f5901d4ace3d91957bfe42d953f9de6b54ce56ee80ee87"}
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.748029 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" event={"ID":"97106b9b-a13e-4f80-8da1-ff7885b694b8","Type":"ContainerStarted","Data":"0a67fba30339bf994c5f6ab89e124808a8dfbe8f5e9c90b12e00bf49838e3f34"}
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.749746 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-6n2zd" event={"ID":"ccd92ea8-69cf-470b-a538-07cf775804b2","Type":"ContainerStarted","Data":"00fcd80f54b426940e318308da8b2f1bbb736e409841143a1f17dd93fdd22f16"}
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.750230 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-6n2zd"
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.752854 5122 generic.go:358] "Generic (PLEG): container finished" podID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerID="5e16b981f5e37e453afdd918e300d3f1593cad99ba2cec6eb854953df7a6b3cc" exitCode=0
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.752900 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" event={"ID":"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9","Type":"ContainerDied","Data":"5e16b981f5e37e453afdd918e300d3f1593cad99ba2cec6eb854953df7a6b3cc"}
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.755004 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-lftx5" event={"ID":"e000557b-7587-4dce-913f-5a9a064194ad","Type":"ContainerStarted","Data":"9f0d29aeda5683c9ddd17cc688ebd51c83c8eed4ca2cb8c85fe6a3dde61273c6"}
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.755585 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-lftx5"
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.757285 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-pmfvq" event={"ID":"9fb9138a-b621-43e2-b489-a7fbaacdbdf5","Type":"ContainerStarted","Data":"598cc678fb72fc8a3b0137bde568a59b358fa614bb9f98eaea76c7866abe1daa"}
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.758535 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" event={"ID":"bc26f0ac-7fb0-4223-a613-38006bf7ed17","Type":"ContainerStarted","Data":"afbd483d6ee738bfc2381dafac1062ae08da16b515e868a441780731c51c2540"}
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.759395 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" event={"ID":"7c0ca16f-37b3-448c-918a-86916007515c","Type":"ContainerStarted","Data":"765d71ea04f799ff3372da46b46257e0b796a77f4a14d1ac0b7d39f7668d8741"}
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.767258 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-qgg49" podStartSLOduration=1.759906813 podStartE2EDuration="15.767241121s" podCreationTimestamp="2026-02-24 00:20:33 +0000 UTC" firstStartedPulling="2026-02-24 00:20:34.140189827 +0000 UTC m=+701.229644340" lastFinishedPulling="2026-02-24 00:20:48.147524135 +0000 UTC m=+715.236978648" observedRunningTime="2026-02-24 00:20:48.763108964 +0000 UTC m=+715.852563497" watchObservedRunningTime="2026-02-24 00:20:48.767241121 +0000 UTC m=+715.856695634"
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.811449 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-mkqml" podStartSLOduration=1.9082016899999998 podStartE2EDuration="15.811423461s" podCreationTimestamp="2026-02-24 00:20:33 +0000 UTC" firstStartedPulling="2026-02-24 00:20:34.227575568 +0000 UTC m=+701.317030071" lastFinishedPulling="2026-02-24 00:20:48.130797329 +0000 UTC m=+715.220251842" observedRunningTime="2026-02-24 00:20:48.80676715 +0000 UTC m=+715.896221673" watchObservedRunningTime="2026-02-24 00:20:48.811423461 +0000 UTC m=+715.900878004"
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.823318 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-6n2zd"
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.877856 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-6n2zd" podStartSLOduration=2.25965447 podStartE2EDuration="15.877834959s" podCreationTimestamp="2026-02-24 00:20:33 +0000 UTC" firstStartedPulling="2026-02-24 00:20:34.530356522 +0000 UTC m=+701.619811025" lastFinishedPulling="2026-02-24 00:20:48.148537001 +0000 UTC m=+715.237991514" observedRunningTime="2026-02-24 00:20:48.851496324 +0000 UTC m=+715.940950837" watchObservedRunningTime="2026-02-24 00:20:48.877834959 +0000 UTC m=+715.967289472"
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.879560 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-lftx5" podStartSLOduration=2.343236537 podStartE2EDuration="15.879548243s" podCreationTimestamp="2026-02-24 00:20:33 +0000 UTC" firstStartedPulling="2026-02-24 00:20:34.688036568 +0000 UTC m=+701.777491081" lastFinishedPulling="2026-02-24 00:20:48.224348274 +0000 UTC m=+715.313802787" observedRunningTime="2026-02-24 00:20:48.871444963 +0000 UTC m=+715.960899486" watchObservedRunningTime="2026-02-24 00:20:48.879548243 +0000 UTC m=+715.969002756"
Feb 24 00:20:48 crc kubenswrapper[5122]: I0224 00:20:48.897241 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt" podStartSLOduration=2.175647723 podStartE2EDuration="15.897224113s" podCreationTimestamp="2026-02-24 00:20:33 +0000 UTC" firstStartedPulling="2026-02-24 00:20:34.447623989 +0000 UTC m=+701.537078502" lastFinishedPulling="2026-02-24 00:20:48.169200379 +0000 UTC m=+715.258654892" observedRunningTime="2026-02-24 00:20:48.895899049 +0000 UTC m=+715.985353572" watchObservedRunningTime="2026-02-24 00:20:48.897224113 +0000 UTC m=+715.986678626"
Feb 24 00:20:49 crc kubenswrapper[5122]: I0224 00:20:49.787096 5122 generic.go:358] "Generic (PLEG): container finished" podID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerID="70ae9816f174b76723c50bc26eb3eefaf83226c9784d5f7cfe59c3cc69b975df" exitCode=0
Feb 24 00:20:49 crc kubenswrapper[5122]: I0224 00:20:49.814465 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" event={"ID":"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9","Type":"ContainerDied","Data":"70ae9816f174b76723c50bc26eb3eefaf83226c9784d5f7cfe59c3cc69b975df"}
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.441219 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl"
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.494915 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-util\") pod \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") "
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.494985 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-bundle\") pod \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") "
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.495027 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx66b\" (UniqueName: \"kubernetes.io/projected/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-kube-api-access-xx66b\") pod \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\" (UID: \"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9\") "
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.496317 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-bundle" (OuterVolumeSpecName: "bundle") pod "3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" (UID: "3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.503382 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-kube-api-access-xx66b" (OuterVolumeSpecName: "kube-api-access-xx66b") pod "3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" (UID: "3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9"). InnerVolumeSpecName "kube-api-access-xx66b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.516992 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-util" (OuterVolumeSpecName: "util") pod "3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" (UID: "3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.595970 5122 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-util\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.596325 5122 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.596338 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xx66b\" (UniqueName: \"kubernetes.io/projected/3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9-kube-api-access-xx66b\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.805248 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" event={"ID":"7c0ca16f-37b3-448c-918a-86916007515c","Type":"ContainerStarted","Data":"7be3bc4b4062c90899a2c878526616d1e7d7800b059abe4ef14944c9caa1a564"}
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.807895 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl" event={"ID":"3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9","Type":"ContainerDied","Data":"bf18c952cbd5a126e6ac53b44cc2b93ba38e74d3bd376c0af47e9eacdeb6048e"}
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.807937 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf18c952cbd5a126e6ac53b44cc2b93ba38e74d3bd376c0af47e9eacdeb6048e"
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.808013 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl"
Feb 24 00:20:52 crc kubenswrapper[5122]: I0224 00:20:52.827454 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7f674d4ff9-mj85p" podStartSLOduration=8.873080596 podStartE2EDuration="12.827438536s" podCreationTimestamp="2026-02-24 00:20:40 +0000 UTC" firstStartedPulling="2026-02-24 00:20:48.516367743 +0000 UTC m=+715.605822256" lastFinishedPulling="2026-02-24 00:20:52.470725683 +0000 UTC m=+719.560180196" observedRunningTime="2026-02-24 00:20:52.826643375 +0000 UTC m=+719.916097908" watchObservedRunningTime="2026-02-24 00:20:52.827438536 +0000 UTC m=+719.916893049"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.787057 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.789232 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerName="extract"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.789259 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerName="extract"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.789276 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerName="util"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.789283 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerName="util"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.789298 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerName="pull"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.789305 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerName="pull"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.789775 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9" containerName="extract"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.836948 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.837197 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.842828 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.842833 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.843316 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.843484 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.843886 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.844104 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-8zkll\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.844149 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.844256 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.844390 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.913894 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.913984 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914011 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914062 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914105 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914161 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914205 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914229 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914570 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914618 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914638 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914677 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/40e78782-0cd0-484d-846c-a2b76a952ae4-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914724 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914745 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:53 crc kubenswrapper[5122]: I0224 00:20:53.914774 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016320 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016363 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016398 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016582 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016602 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016849 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/40e78782-0cd0-484d-846c-a2b76a952ae4-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016883 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016974 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.016996 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017031 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017131 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017163 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017180 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017259 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017281 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017349 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017465 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017543 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017715 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.017766 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.018443 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.019103 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.019351 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.022602 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/40e78782-0cd0-484d-846c-a2b76a952ae4-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.024787 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.024924 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.026877 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.027159 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.039750 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.045247 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/40e78782-0cd0-484d-846c-a2b76a952ae4-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"40e78782-0cd0-484d-846c-a2b76a952ae4\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:54 crc kubenswrapper[5122]: I0224 00:20:54.160583 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:20:58 crc kubenswrapper[5122]: I0224 00:20:58.874103 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-pmfvq" event={"ID":"9fb9138a-b621-43e2-b489-a7fbaacdbdf5","Type":"ContainerStarted","Data":"ed4c9b134be4ce8ab17e4b3a293fe1ba61056b166ebd0cf6faf069495895d035"}
Feb 24 00:20:58 crc kubenswrapper[5122]: I0224 00:20:58.891451 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 24 00:20:58 crc kubenswrapper[5122]: W0224 00:20:58.895474 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40e78782_0cd0_484d_846c_a2b76a952ae4.slice/crio-2b2acd21d4d8cbf292fb728ca369355736157995402337a319596cb169dc04d3 WatchSource:0}: Error finding container 2b2acd21d4d8cbf292fb728ca369355736157995402337a319596cb169dc04d3: Status 404 returned error can't find the container with id 2b2acd21d4d8cbf292fb728ca369355736157995402337a319596cb169dc04d3
Feb 24 00:20:58 crc kubenswrapper[5122]: I0224 00:20:58.911421 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-pmfvq" podStartSLOduration=11.731519322 podStartE2EDuration="21.911395582s" podCreationTimestamp="2026-02-24 00:20:37 +0000 UTC" firstStartedPulling="2026-02-24 00:20:48.361918414 +0000 UTC m=+715.451372927" lastFinishedPulling="2026-02-24 00:20:58.541794674 +0000 UTC m=+725.631249187" observedRunningTime="2026-02-24 00:20:58.90824453 +0000 UTC m=+725.997699053" watchObservedRunningTime="2026-02-24 00:20:58.911395582 +0000 UTC m=+726.000850095"
Feb 24 00:20:59 crc kubenswrapper[5122]: I0224 00:20:59.883575 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0"
event={"ID":"40e78782-0cd0-484d-846c-a2b76a952ae4","Type":"ContainerStarted","Data":"2b2acd21d4d8cbf292fb728ca369355736157995402337a319596cb169dc04d3"} Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.235649 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.366007 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.366239 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.368316 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.368316 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.368785 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-28rxw\"" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.369247 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514117 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc 
kubenswrapper[5122]: I0224 00:21:00.514155 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514181 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514215 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514244 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514267 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildworkdir\") pod 
\"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514285 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514312 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514450 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514578 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-push\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514635 5122 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmn8\" (UniqueName: \"kubernetes.io/projected/cbb428d4-e310-4912-888a-c9fb27d0a82e-kube-api-access-ksmn8\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.514662 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616264 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616347 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616385 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-push\") pod \"service-telemetry-operator-1-build\" (UID: 
\"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616519 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616557 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmn8\" (UniqueName: \"kubernetes.io/projected/cbb428d4-e310-4912-888a-c9fb27d0a82e-kube-api-access-ksmn8\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616602 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616670 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616693 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616739 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616828 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616912 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616954 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.616986 5122 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.617050 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.617200 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.617334 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.617434 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc 
kubenswrapper[5122]: I0224 00:21:00.617507 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.617644 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.617834 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.617928 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.625873 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.629765 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-push\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.646536 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksmn8\" (UniqueName: \"kubernetes.io/projected/cbb428d4-e310-4912-888a-c9fb27d0a82e-kube-api-access-ksmn8\") pod \"service-telemetry-operator-1-build\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.683934 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.797029 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-lftx5" Feb 24 00:21:00 crc kubenswrapper[5122]: I0224 00:21:00.961616 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:21:01 crc kubenswrapper[5122]: I0224 00:21:01.904382 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"cbb428d4-e310-4912-888a-c9fb27d0a82e","Type":"ContainerStarted","Data":"5131c43e4c13ee9460b8a29b8ff678096b39dffc14e66371cf51c27f95b03664"} Feb 24 00:21:08 crc kubenswrapper[5122]: I0224 00:21:08.924467 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd"] Feb 24 00:21:08 crc kubenswrapper[5122]: I0224 00:21:08.935036 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" Feb 24 00:21:08 crc kubenswrapper[5122]: I0224 00:21:08.940666 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Feb 24 00:21:08 crc kubenswrapper[5122]: I0224 00:21:08.940791 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-lqnll\"" Feb 24 00:21:08 crc kubenswrapper[5122]: I0224 00:21:08.941009 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Feb 24 00:21:08 crc kubenswrapper[5122]: I0224 00:21:08.945605 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd"] Feb 24 00:21:09 crc kubenswrapper[5122]: I0224 00:21:09.061812 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fc70c93-75eb-417f-9f7f-565bb655bed0-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-hjbxd\" (UID: \"1fc70c93-75eb-417f-9f7f-565bb655bed0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" Feb 24 00:21:09 crc kubenswrapper[5122]: I0224 00:21:09.061876 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpb7s\" (UniqueName: \"kubernetes.io/projected/1fc70c93-75eb-417f-9f7f-565bb655bed0-kube-api-access-wpb7s\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-hjbxd\" (UID: \"1fc70c93-75eb-417f-9f7f-565bb655bed0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" Feb 24 00:21:09 crc kubenswrapper[5122]: I0224 00:21:09.163065 5122 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fc70c93-75eb-417f-9f7f-565bb655bed0-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-hjbxd\" (UID: \"1fc70c93-75eb-417f-9f7f-565bb655bed0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" Feb 24 00:21:09 crc kubenswrapper[5122]: I0224 00:21:09.163146 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wpb7s\" (UniqueName: \"kubernetes.io/projected/1fc70c93-75eb-417f-9f7f-565bb655bed0-kube-api-access-wpb7s\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-hjbxd\" (UID: \"1fc70c93-75eb-417f-9f7f-565bb655bed0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" Feb 24 00:21:09 crc kubenswrapper[5122]: I0224 00:21:09.163552 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fc70c93-75eb-417f-9f7f-565bb655bed0-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-hjbxd\" (UID: \"1fc70c93-75eb-417f-9f7f-565bb655bed0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" Feb 24 00:21:09 crc kubenswrapper[5122]: I0224 00:21:09.183591 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpb7s\" (UniqueName: \"kubernetes.io/projected/1fc70c93-75eb-417f-9f7f-565bb655bed0-kube-api-access-wpb7s\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-hjbxd\" (UID: \"1fc70c93-75eb-417f-9f7f-565bb655bed0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" Feb 24 00:21:09 crc kubenswrapper[5122]: I0224 00:21:09.264319 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" Feb 24 00:21:10 crc kubenswrapper[5122]: I0224 00:21:10.204125 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:21:11 crc kubenswrapper[5122]: I0224 00:21:11.865616 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:21:11 crc kubenswrapper[5122]: I0224 00:21:11.954272 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:21:11 crc kubenswrapper[5122]: I0224 00:21:11.954392 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:11 crc kubenswrapper[5122]: I0224 00:21:11.956476 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Feb 24 00:21:11 crc kubenswrapper[5122]: I0224 00:21:11.956506 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Feb 24 00:21:11 crc kubenswrapper[5122]: I0224 00:21:11.956765 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.102801 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.102848 5122 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.102875 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.102946 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.103017 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.103055 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-push\") pod 
\"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.103127 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.103257 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.103364 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mz8\" (UniqueName: \"kubernetes.io/projected/45bf38ed-1ab3-4e4f-960f-19695f49f433-kube-api-access-h2mz8\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.103446 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.103499 5122 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.103527 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.204618 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h2mz8\" (UniqueName: \"kubernetes.io/projected/45bf38ed-1ab3-4e4f-960f-19695f49f433-kube-api-access-h2mz8\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.204681 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.204707 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.204729 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.204765 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.204879 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.204790 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205012 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205037 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205094 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205115 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-push\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205145 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205197 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205488 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205557 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205728 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.205728 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.206047 5122 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.206056 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.206500 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.206603 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.210717 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-push\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc 
kubenswrapper[5122]: I0224 00:21:12.216610 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.219836 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2mz8\" (UniqueName: \"kubernetes.io/projected/45bf38ed-1ab3-4e4f-960f-19695f49f433-kube-api-access-h2mz8\") pod \"service-telemetry-operator-2-build\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:12 crc kubenswrapper[5122]: I0224 00:21:12.276408 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:21:16 crc kubenswrapper[5122]: I0224 00:21:16.804995 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:21:16 crc kubenswrapper[5122]: W0224 00:21:16.806718 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bf38ed_1ab3_4e4f_960f_19695f49f433.slice/crio-97a248be205efafa6e7d39c26a7faced8f8454ac1c7405ebab127eb89631c30b WatchSource:0}: Error finding container 97a248be205efafa6e7d39c26a7faced8f8454ac1c7405ebab127eb89631c30b: Status 404 returned error can't find the container with id 97a248be205efafa6e7d39c26a7faced8f8454ac1c7405ebab127eb89631c30b Feb 24 00:21:16 crc kubenswrapper[5122]: I0224 00:21:16.848016 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd"] Feb 24 00:21:16 crc kubenswrapper[5122]: W0224 
00:21:16.855906 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fc70c93_75eb_417f_9f7f_565bb655bed0.slice/crio-1f51f990dd44b1ba925df1123bbc9b71bbb1ec17624d4c9682d7c1d2558e816d WatchSource:0}: Error finding container 1f51f990dd44b1ba925df1123bbc9b71bbb1ec17624d4c9682d7c1d2558e816d: Status 404 returned error can't find the container with id 1f51f990dd44b1ba925df1123bbc9b71bbb1ec17624d4c9682d7c1d2558e816d Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.033616 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"40e78782-0cd0-484d-846c-a2b76a952ae4","Type":"ContainerStarted","Data":"55ecdc94a26a127aa3af858fbeee826ec039e2823e20a90bc3d7be86cc945e5d"} Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.035213 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45bf38ed-1ab3-4e4f-960f-19695f49f433","Type":"ContainerStarted","Data":"97a248be205efafa6e7d39c26a7faced8f8454ac1c7405ebab127eb89631c30b"} Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.036984 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"cbb428d4-e310-4912-888a-c9fb27d0a82e","Type":"ContainerStarted","Data":"26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d"} Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.037090 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="cbb428d4-e310-4912-888a-c9fb27d0a82e" containerName="manage-dockerfile" containerID="cri-o://26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d" gracePeriod=30 Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.038168 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" event={"ID":"1fc70c93-75eb-417f-9f7f-565bb655bed0","Type":"ContainerStarted","Data":"1f51f990dd44b1ba925df1123bbc9b71bbb1ec17624d4c9682d7c1d2558e816d"} Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.215026 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.248427 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.514433 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_cbb428d4-e310-4912-888a-c9fb27d0a82e/manage-dockerfile/0.log" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.514769 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687012 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmn8\" (UniqueName: \"kubernetes.io/projected/cbb428d4-e310-4912-888a-c9fb27d0a82e-kube-api-access-ksmn8\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687059 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildworkdir\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687173 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-blob-cache\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687207 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-push\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687565 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-pull\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687622 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-proxy-ca-bundles\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687649 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687708 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-root\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687730 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildcachedir\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687749 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-node-pullsecrets\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687795 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-run\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687909 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-system-configs\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687912 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687931 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-ca-bundles\") pod \"cbb428d4-e310-4912-888a-c9fb27d0a82e\" (UID: \"cbb428d4-e310-4912-888a-c9fb27d0a82e\") " Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.687958 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688049 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688149 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688283 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688297 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688308 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688320 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688330 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbb428d4-e310-4912-888a-c9fb27d0a82e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688327 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688493 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688521 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.688537 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.692452 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.692757 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.692961 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb428d4-e310-4912-888a-c9fb27d0a82e-kube-api-access-ksmn8" (OuterVolumeSpecName: "kube-api-access-ksmn8") pod "cbb428d4-e310-4912-888a-c9fb27d0a82e" (UID: "cbb428d4-e310-4912-888a-c9fb27d0a82e"). InnerVolumeSpecName "kube-api-access-ksmn8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.790624 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.790652 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbb428d4-e310-4912-888a-c9fb27d0a82e-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.790662 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.790670 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cbb428d4-e310-4912-888a-c9fb27d0a82e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.790678 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ksmn8\" (UniqueName: \"kubernetes.io/projected/cbb428d4-e310-4912-888a-c9fb27d0a82e-kube-api-access-ksmn8\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.790686 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:17 crc kubenswrapper[5122]: I0224 00:21:17.790695 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/cbb428d4-e310-4912-888a-c9fb27d0a82e-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.045796 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45bf38ed-1ab3-4e4f-960f-19695f49f433","Type":"ContainerStarted","Data":"27fb6180abc03cfc8e6c7287833bab274b950e54ed6dce2f53aa4cee74110266"} Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.047379 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_cbb428d4-e310-4912-888a-c9fb27d0a82e/manage-dockerfile/0.log" Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.047447 5122 generic.go:358] "Generic (PLEG): container finished" podID="cbb428d4-e310-4912-888a-c9fb27d0a82e" containerID="26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d" exitCode=1 Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.048902 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.049364 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"cbb428d4-e310-4912-888a-c9fb27d0a82e","Type":"ContainerDied","Data":"26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d"} Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.049392 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"cbb428d4-e310-4912-888a-c9fb27d0a82e","Type":"ContainerDied","Data":"5131c43e4c13ee9460b8a29b8ff678096b39dffc14e66371cf51c27f95b03664"} Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.049410 5122 scope.go:117] "RemoveContainer" containerID="26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d" Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.075688 5122 scope.go:117] "RemoveContainer" containerID="26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d" Feb 24 00:21:18 crc kubenswrapper[5122]: E0224 00:21:18.076240 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d\": container with ID starting with 26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d not found: ID does not exist" containerID="26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d" Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.076322 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d"} err="failed to get container status \"26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d\": rpc error: code = NotFound desc = could not find container 
\"26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d\": container with ID starting with 26fb7bcbd4ed9740ffa4ceff46fa78150721a0e2bac15749112b79c7fe7fce6d not found: ID does not exist"
Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.090867 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 24 00:21:18 crc kubenswrapper[5122]: I0224 00:21:18.098541 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 24 00:21:19 crc kubenswrapper[5122]: I0224 00:21:19.055708 5122 generic.go:358] "Generic (PLEG): container finished" podID="40e78782-0cd0-484d-846c-a2b76a952ae4" containerID="55ecdc94a26a127aa3af858fbeee826ec039e2823e20a90bc3d7be86cc945e5d" exitCode=0
Feb 24 00:21:19 crc kubenswrapper[5122]: I0224 00:21:19.055841 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"40e78782-0cd0-484d-846c-a2b76a952ae4","Type":"ContainerDied","Data":"55ecdc94a26a127aa3af858fbeee826ec039e2823e20a90bc3d7be86cc945e5d"}
Feb 24 00:21:19 crc kubenswrapper[5122]: I0224 00:21:19.781510 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb428d4-e310-4912-888a-c9fb27d0a82e" path="/var/lib/kubelet/pods/cbb428d4-e310-4912-888a-c9fb27d0a82e/volumes"
Feb 24 00:21:20 crc kubenswrapper[5122]: I0224 00:21:20.072752 5122 generic.go:358] "Generic (PLEG): container finished" podID="40e78782-0cd0-484d-846c-a2b76a952ae4" containerID="219e3fa6ee5d6db9b67f91dfe5bca1e9bb7a898f30d4dafb240985af9acd5f9d" exitCode=0
Feb 24 00:21:20 crc kubenswrapper[5122]: I0224 00:21:20.072893 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"40e78782-0cd0-484d-846c-a2b76a952ae4","Type":"ContainerDied","Data":"219e3fa6ee5d6db9b67f91dfe5bca1e9bb7a898f30d4dafb240985af9acd5f9d"}
Feb 24 00:21:25 crc kubenswrapper[5122]: I0224 00:21:25.108419 5122 generic.go:358] "Generic (PLEG): container finished" podID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerID="27fb6180abc03cfc8e6c7287833bab274b950e54ed6dce2f53aa4cee74110266" exitCode=0
Feb 24 00:21:25 crc kubenswrapper[5122]: I0224 00:21:25.108469 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45bf38ed-1ab3-4e4f-960f-19695f49f433","Type":"ContainerDied","Data":"27fb6180abc03cfc8e6c7287833bab274b950e54ed6dce2f53aa4cee74110266"}
Feb 24 00:21:26 crc kubenswrapper[5122]: I0224 00:21:26.171463 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" event={"ID":"1fc70c93-75eb-417f-9f7f-565bb655bed0","Type":"ContainerStarted","Data":"ee1e42c70ff675f22d07cc21b236f0bf24eb1e0d9ce66f87ddf7a10f9feafe38"}
Feb 24 00:21:26 crc kubenswrapper[5122]: I0224 00:21:26.176130 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"40e78782-0cd0-484d-846c-a2b76a952ae4","Type":"ContainerStarted","Data":"6e746ea1df9e01167230348070311fd92ff36c99c2982c32df59f9973ab72831"}
Feb 24 00:21:26 crc kubenswrapper[5122]: I0224 00:21:26.176756 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:21:26 crc kubenswrapper[5122]: I0224 00:21:26.178218 5122 generic.go:358] "Generic (PLEG): container finished" podID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerID="7ac9c36a266d125eb1635e48c87372b50cbca84f74be24ba599c19afb504b38b" exitCode=0
Feb 24 00:21:26 crc kubenswrapper[5122]: I0224 00:21:26.178294 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45bf38ed-1ab3-4e4f-960f-19695f49f433","Type":"ContainerDied","Data":"7ac9c36a266d125eb1635e48c87372b50cbca84f74be24ba599c19afb504b38b"}
Feb 24 00:21:26 crc kubenswrapper[5122]: I0224 00:21:26.196387 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-hjbxd" podStartSLOduration=9.561038188 podStartE2EDuration="18.196373005s" podCreationTimestamp="2026-02-24 00:21:08 +0000 UTC" firstStartedPulling="2026-02-24 00:21:16.858899138 +0000 UTC m=+743.948353651" lastFinishedPulling="2026-02-24 00:21:25.494233955 +0000 UTC m=+752.583688468" observedRunningTime="2026-02-24 00:21:26.192994947 +0000 UTC m=+753.282449470" watchObservedRunningTime="2026-02-24 00:21:26.196373005 +0000 UTC m=+753.285827518"
Feb 24 00:21:26 crc kubenswrapper[5122]: I0224 00:21:26.246180 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=15.484597962 podStartE2EDuration="33.24616084s" podCreationTimestamp="2026-02-24 00:20:53 +0000 UTC" firstStartedPulling="2026-02-24 00:20:58.898803674 +0000 UTC m=+725.988258187" lastFinishedPulling="2026-02-24 00:21:16.660366552 +0000 UTC m=+743.749821065" observedRunningTime="2026-02-24 00:21:26.244557889 +0000 UTC m=+753.334012422" watchObservedRunningTime="2026-02-24 00:21:26.24616084 +0000 UTC m=+753.335615353"
Feb 24 00:21:26 crc kubenswrapper[5122]: I0224 00:21:26.292594 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_45bf38ed-1ab3-4e4f-960f-19695f49f433/manage-dockerfile/0.log"
Feb 24 00:21:27 crc kubenswrapper[5122]: I0224 00:21:27.115260 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:21:27 crc kubenswrapper[5122]: I0224 00:21:27.115327 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:21:27 crc kubenswrapper[5122]: I0224 00:21:27.188062 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45bf38ed-1ab3-4e4f-960f-19695f49f433","Type":"ContainerStarted","Data":"98052cca4ce4852d410afe93a3d41bd059abca5796c66b6d2efefc5f6ecf72fe"}
Feb 24 00:21:27 crc kubenswrapper[5122]: I0224 00:21:27.215261 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=16.215242478 podStartE2EDuration="16.215242478s" podCreationTimestamp="2026-02-24 00:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:21:27.210506115 +0000 UTC m=+754.299960718" watchObservedRunningTime="2026-02-24 00:21:27.215242478 +0000 UTC m=+754.304697001"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.179511 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zlxr9"]
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.180538 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbb428d4-e310-4912-888a-c9fb27d0a82e" containerName="manage-dockerfile"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.180577 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb428d4-e310-4912-888a-c9fb27d0a82e" containerName="manage-dockerfile"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.180723 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbb428d4-e310-4912-888a-c9fb27d0a82e" containerName="manage-dockerfile"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.509900 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zlxr9"]
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.510527 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.537723 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-catalog-content\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.537832 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmn2\" (UniqueName: \"kubernetes.io/projected/994ba1c7-5b42-457b-81fe-5dc11df7b170-kube-api-access-jrmn2\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.537954 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-utilities\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.639357 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-catalog-content\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.640065 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-catalog-content\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.640301 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jrmn2\" (UniqueName: \"kubernetes.io/projected/994ba1c7-5b42-457b-81fe-5dc11df7b170-kube-api-access-jrmn2\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.641258 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-utilities\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.642411 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-utilities\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.668802 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrmn2\" (UniqueName: \"kubernetes.io/projected/994ba1c7-5b42-457b-81fe-5dc11df7b170-kube-api-access-jrmn2\") pod \"certified-operators-zlxr9\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") " pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:28 crc kubenswrapper[5122]: I0224 00:21:28.843637 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:29 crc kubenswrapper[5122]: I0224 00:21:29.876649 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zlxr9"]
Feb 24 00:21:30 crc kubenswrapper[5122]: I0224 00:21:30.206249 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlxr9" event={"ID":"994ba1c7-5b42-457b-81fe-5dc11df7b170","Type":"ContainerStarted","Data":"7ca986129e7732c18badde03d249c499e4d6653f4bbd732a397c1d42ef9b2c7a"}
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.241521 5122 generic.go:358] "Generic (PLEG): container finished" podID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerID="004acb044e08bf1158a6ff556a335167a581635956c4d59e781c43f328c08ddc" exitCode=0
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.241723 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlxr9" event={"ID":"994ba1c7-5b42-457b-81fe-5dc11df7b170","Type":"ContainerDied","Data":"004acb044e08bf1158a6ff556a335167a581635956c4d59e781c43f328c08ddc"}
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.602212 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"]
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.606625 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.614753 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"]
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.618361 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.618636 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.618793 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-tdzxj\""
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.680627 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9r4k\" (UniqueName: \"kubernetes.io/projected/95a05fce-6267-44f4-ad33-ca687ffaeb63-kube-api-access-h9r4k\") pod \"cert-manager-cainjector-8966b78d4-zj7k5\" (UID: \"95a05fce-6267-44f4-ad33-ca687ffaeb63\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.680683 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95a05fce-6267-44f4-ad33-ca687ffaeb63-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-zj7k5\" (UID: \"95a05fce-6267-44f4-ad33-ca687ffaeb63\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.782211 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h9r4k\" (UniqueName: \"kubernetes.io/projected/95a05fce-6267-44f4-ad33-ca687ffaeb63-kube-api-access-h9r4k\") pod \"cert-manager-cainjector-8966b78d4-zj7k5\" (UID: \"95a05fce-6267-44f4-ad33-ca687ffaeb63\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.782264 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95a05fce-6267-44f4-ad33-ca687ffaeb63-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-zj7k5\" (UID: \"95a05fce-6267-44f4-ad33-ca687ffaeb63\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.802033 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95a05fce-6267-44f4-ad33-ca687ffaeb63-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-zj7k5\" (UID: \"95a05fce-6267-44f4-ad33-ca687ffaeb63\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.812024 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9r4k\" (UniqueName: \"kubernetes.io/projected/95a05fce-6267-44f4-ad33-ca687ffaeb63-kube-api-access-h9r4k\") pod \"cert-manager-cainjector-8966b78d4-zj7k5\" (UID: \"95a05fce-6267-44f4-ad33-ca687ffaeb63\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"
Feb 24 00:21:31 crc kubenswrapper[5122]: I0224 00:21:31.926152 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"
Feb 24 00:21:32 crc kubenswrapper[5122]: I0224 00:21:32.180890 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-zj7k5"]
Feb 24 00:21:32 crc kubenswrapper[5122]: W0224 00:21:32.182724 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95a05fce_6267_44f4_ad33_ca687ffaeb63.slice/crio-1939a228e9beeded6a88fb8fe23f4ba5df99b241f8b6e55d8f6fc60702ccd818 WatchSource:0}: Error finding container 1939a228e9beeded6a88fb8fe23f4ba5df99b241f8b6e55d8f6fc60702ccd818: Status 404 returned error can't find the container with id 1939a228e9beeded6a88fb8fe23f4ba5df99b241f8b6e55d8f6fc60702ccd818
Feb 24 00:21:32 crc kubenswrapper[5122]: I0224 00:21:32.249697 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlxr9" event={"ID":"994ba1c7-5b42-457b-81fe-5dc11df7b170","Type":"ContainerStarted","Data":"f93e8ffad962d4909e6a965cd6fcbf9acb3eddc8289dcec6b192b3dde3d63fb3"}
Feb 24 00:21:32 crc kubenswrapper[5122]: I0224 00:21:32.250876 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5" event={"ID":"95a05fce-6267-44f4-ad33-ca687ffaeb63","Type":"ContainerStarted","Data":"1939a228e9beeded6a88fb8fe23f4ba5df99b241f8b6e55d8f6fc60702ccd818"}
Feb 24 00:21:33 crc kubenswrapper[5122]: I0224 00:21:33.257660 5122 generic.go:358] "Generic (PLEG): container finished" podID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerID="f93e8ffad962d4909e6a965cd6fcbf9acb3eddc8289dcec6b192b3dde3d63fb3" exitCode=0
Feb 24 00:21:33 crc kubenswrapper[5122]: I0224 00:21:33.257720 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlxr9" event={"ID":"994ba1c7-5b42-457b-81fe-5dc11df7b170","Type":"ContainerDied","Data":"f93e8ffad962d4909e6a965cd6fcbf9acb3eddc8289dcec6b192b3dde3d63fb3"}
Feb 24 00:21:34 crc kubenswrapper[5122]: I0224 00:21:34.267230 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlxr9" event={"ID":"994ba1c7-5b42-457b-81fe-5dc11df7b170","Type":"ContainerStarted","Data":"ab5192f4b4b61258b6afadfda7e69c46725711b7c347dc0c2161060f4a41ec00"}
Feb 24 00:21:34 crc kubenswrapper[5122]: I0224 00:21:34.288473 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zlxr9" podStartSLOduration=5.5007856010000005 podStartE2EDuration="6.288457627s" podCreationTimestamp="2026-02-24 00:21:28 +0000 UTC" firstStartedPulling="2026-02-24 00:21:31.242720602 +0000 UTC m=+758.332175115" lastFinishedPulling="2026-02-24 00:21:32.030392618 +0000 UTC m=+759.119847141" observedRunningTime="2026-02-24 00:21:34.286742942 +0000 UTC m=+761.376197465" watchObservedRunningTime="2026-02-24 00:21:34.288457627 +0000 UTC m=+761.377912140"
Feb 24 00:21:35 crc kubenswrapper[5122]: I0224 00:21:35.688871 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-579tp"]
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.300121 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-579tp"]
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.300301 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.302339 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-tfpp4\""
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.381304 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7361183c-a085-44ff-9f1f-b3d494a5296c-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-579tp\" (UID: \"7361183c-a085-44ff-9f1f-b3d494a5296c\") " pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.381523 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6dd\" (UniqueName: \"kubernetes.io/projected/7361183c-a085-44ff-9f1f-b3d494a5296c-kube-api-access-ql6dd\") pod \"cert-manager-webhook-597b96b99b-579tp\" (UID: \"7361183c-a085-44ff-9f1f-b3d494a5296c\") " pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.482549 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6dd\" (UniqueName: \"kubernetes.io/projected/7361183c-a085-44ff-9f1f-b3d494a5296c-kube-api-access-ql6dd\") pod \"cert-manager-webhook-597b96b99b-579tp\" (UID: \"7361183c-a085-44ff-9f1f-b3d494a5296c\") " pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.482707 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7361183c-a085-44ff-9f1f-b3d494a5296c-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-579tp\" (UID: \"7361183c-a085-44ff-9f1f-b3d494a5296c\") " pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.505284 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7361183c-a085-44ff-9f1f-b3d494a5296c-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-579tp\" (UID: \"7361183c-a085-44ff-9f1f-b3d494a5296c\") " pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.507043 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6dd\" (UniqueName: \"kubernetes.io/projected/7361183c-a085-44ff-9f1f-b3d494a5296c-kube-api-access-ql6dd\") pod \"cert-manager-webhook-597b96b99b-579tp\" (UID: \"7361183c-a085-44ff-9f1f-b3d494a5296c\") " pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:36 crc kubenswrapper[5122]: I0224 00:21:36.619615 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:37 crc kubenswrapper[5122]: I0224 00:21:37.023702 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-579tp"]
Feb 24 00:21:37 crc kubenswrapper[5122]: W0224 00:21:37.032466 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7361183c_a085_44ff_9f1f_b3d494a5296c.slice/crio-4574f5abe7fc726bd4d8bd2c7e62b1fc8465cd812ca19471c6d4d058fd401d46 WatchSource:0}: Error finding container 4574f5abe7fc726bd4d8bd2c7e62b1fc8465cd812ca19471c6d4d058fd401d46: Status 404 returned error can't find the container with id 4574f5abe7fc726bd4d8bd2c7e62b1fc8465cd812ca19471c6d4d058fd401d46
Feb 24 00:21:37 crc kubenswrapper[5122]: I0224 00:21:37.293584 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5" event={"ID":"95a05fce-6267-44f4-ad33-ca687ffaeb63","Type":"ContainerStarted","Data":"7858caec4463e34616d881827520927cb2610da39f948afcbc799150e97b1c4c"}
Feb 24 00:21:37 crc kubenswrapper[5122]: I0224 00:21:37.294743 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-579tp" event={"ID":"7361183c-a085-44ff-9f1f-b3d494a5296c","Type":"ContainerStarted","Data":"94d16c960eb42934c7445ee9a0ad91e0a4b801459acf74c5c18fa62950d704ae"}
Feb 24 00:21:37 crc kubenswrapper[5122]: I0224 00:21:37.294775 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-579tp" event={"ID":"7361183c-a085-44ff-9f1f-b3d494a5296c","Type":"ContainerStarted","Data":"4574f5abe7fc726bd4d8bd2c7e62b1fc8465cd812ca19471c6d4d058fd401d46"}
Feb 24 00:21:37 crc kubenswrapper[5122]: I0224 00:21:37.294887 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:37 crc kubenswrapper[5122]: I0224 00:21:37.307862 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-zj7k5" podStartSLOduration=1.574281301 podStartE2EDuration="6.307848967s" podCreationTimestamp="2026-02-24 00:21:31 +0000 UTC" firstStartedPulling="2026-02-24 00:21:32.184990481 +0000 UTC m=+759.274444994" lastFinishedPulling="2026-02-24 00:21:36.918558147 +0000 UTC m=+764.008012660" observedRunningTime="2026-02-24 00:21:37.305932197 +0000 UTC m=+764.395386710" watchObservedRunningTime="2026-02-24 00:21:37.307848967 +0000 UTC m=+764.397303480"
Feb 24 00:21:37 crc kubenswrapper[5122]: I0224 00:21:37.323015 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-579tp" podStartSLOduration=2.323000491 podStartE2EDuration="2.323000491s" podCreationTimestamp="2026-02-24 00:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:21:37.321299977 +0000 UTC m=+764.410754500" watchObservedRunningTime="2026-02-24 00:21:37.323000491 +0000 UTC m=+764.412455004"
Feb 24 00:21:38 crc kubenswrapper[5122]: I0224 00:21:38.647305 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="40e78782-0cd0-484d-846c-a2b76a952ae4" containerName="elasticsearch" probeResult="failure" output=<
Feb 24 00:21:38 crc kubenswrapper[5122]: {"timestamp": "2026-02-24T00:21:38+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 24 00:21:38 crc kubenswrapper[5122]: >
Feb 24 00:21:38 crc kubenswrapper[5122]: I0224 00:21:38.844721 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:38 crc kubenswrapper[5122]: I0224 00:21:38.844786 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:38 crc kubenswrapper[5122]: I0224 00:21:38.884987 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:39 crc kubenswrapper[5122]: I0224 00:21:39.370533 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:39 crc kubenswrapper[5122]: I0224 00:21:39.420771 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zlxr9"]
Feb 24 00:21:41 crc kubenswrapper[5122]: I0224 00:21:41.324856 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zlxr9" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerName="registry-server" containerID="cri-o://ab5192f4b4b61258b6afadfda7e69c46725711b7c347dc0c2161060f4a41ec00" gracePeriod=2
Feb 24 00:21:43 crc kubenswrapper[5122]: I0224 00:21:43.303428 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-579tp"
Feb 24 00:21:43 crc kubenswrapper[5122]: I0224 00:21:43.346527 5122 generic.go:358] "Generic (PLEG): container finished" podID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerID="ab5192f4b4b61258b6afadfda7e69c46725711b7c347dc0c2161060f4a41ec00" exitCode=0
Feb 24 00:21:43 crc kubenswrapper[5122]: I0224 00:21:43.346654 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlxr9" event={"ID":"994ba1c7-5b42-457b-81fe-5dc11df7b170","Type":"ContainerDied","Data":"ab5192f4b4b61258b6afadfda7e69c46725711b7c347dc0c2161060f4a41ec00"}
Feb 24 00:21:43 crc kubenswrapper[5122]: I0224 00:21:43.581232 5122 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="40e78782-0cd0-484d-846c-a2b76a952ae4" containerName="elasticsearch" probeResult="failure" output=<
Feb 24 00:21:43 crc kubenswrapper[5122]: {"timestamp": "2026-02-24T00:21:43+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 24 00:21:43 crc kubenswrapper[5122]: >
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.306214 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.382467 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrmn2\" (UniqueName: \"kubernetes.io/projected/994ba1c7-5b42-457b-81fe-5dc11df7b170-kube-api-access-jrmn2\") pod \"994ba1c7-5b42-457b-81fe-5dc11df7b170\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") "
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.382519 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlxr9" event={"ID":"994ba1c7-5b42-457b-81fe-5dc11df7b170","Type":"ContainerDied","Data":"7ca986129e7732c18badde03d249c499e4d6653f4bbd732a397c1d42ef9b2c7a"}
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.382550 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlxr9"
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.382555 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-catalog-content\") pod \"994ba1c7-5b42-457b-81fe-5dc11df7b170\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") "
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.382735 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-utilities\") pod \"994ba1c7-5b42-457b-81fe-5dc11df7b170\" (UID: \"994ba1c7-5b42-457b-81fe-5dc11df7b170\") "
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.382559 5122 scope.go:117] "RemoveContainer" containerID="ab5192f4b4b61258b6afadfda7e69c46725711b7c347dc0c2161060f4a41ec00"
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.383870 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-utilities" (OuterVolumeSpecName: "utilities") pod "994ba1c7-5b42-457b-81fe-5dc11df7b170" (UID: "994ba1c7-5b42-457b-81fe-5dc11df7b170"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.410444 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994ba1c7-5b42-457b-81fe-5dc11df7b170-kube-api-access-jrmn2" (OuterVolumeSpecName: "kube-api-access-jrmn2") pod "994ba1c7-5b42-457b-81fe-5dc11df7b170" (UID: "994ba1c7-5b42-457b-81fe-5dc11df7b170"). InnerVolumeSpecName "kube-api-access-jrmn2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.420330 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "994ba1c7-5b42-457b-81fe-5dc11df7b170" (UID: "994ba1c7-5b42-457b-81fe-5dc11df7b170"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.423561 5122 scope.go:117] "RemoveContainer" containerID="f93e8ffad962d4909e6a965cd6fcbf9acb3eddc8289dcec6b192b3dde3d63fb3"
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.445214 5122 scope.go:117] "RemoveContainer" containerID="004acb044e08bf1158a6ff556a335167a581635956c4d59e781c43f328c08ddc"
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.484370 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrmn2\" (UniqueName: \"kubernetes.io/projected/994ba1c7-5b42-457b-81fe-5dc11df7b170-kube-api-access-jrmn2\") on node \"crc\" DevicePath \"\""
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.484402 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.484411 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994ba1c7-5b42-457b-81fe-5dc11df7b170-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.715803 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zlxr9"]
Feb 24 00:21:44 crc kubenswrapper[5122]: I0224 00:21:44.723702 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zlxr9"]
Feb 24 00:21:45 crc kubenswrapper[5122]: I0224 00:21:45.784511 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" path="/var/lib/kubelet/pods/994ba1c7-5b42-457b-81fe-5dc11df7b170/volumes"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.544449 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4swbx"]
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.545672 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerName="extract-utilities"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.545699 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerName="extract-utilities"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.545752 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerName="registry-server"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.545760 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerName="registry-server"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.545772 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerName="extract-content"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.545779 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerName="extract-content"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.545892 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="994ba1c7-5b42-457b-81fe-5dc11df7b170" containerName="registry-server"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.591968 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4swbx"]
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.592155 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.712726 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5fh\" (UniqueName: \"kubernetes.io/projected/8446a1b2-a999-46df-9f88-48793f82f831-kube-api-access-zl5fh\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.712788 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-utilities\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.712835 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-catalog-content\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.814653 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5fh\" (UniqueName: \"kubernetes.io/projected/8446a1b2-a999-46df-9f88-48793f82f831-kube-api-access-zl5fh\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.814704 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-utilities\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.814739 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-catalog-content\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.815256 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-catalog-content\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.815459 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-utilities\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.834806 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5fh\" (UniqueName: \"kubernetes.io/projected/8446a1b2-a999-46df-9f88-48793f82f831-kube-api-access-zl5fh\") pod \"community-operators-4swbx\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " pod="openshift-marketplace/community-operators-4swbx"
Feb 24 00:21:46 crc kubenswrapper[5122]: I0224 00:21:46.907058 5122 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-4swbx" Feb 24 00:21:47 crc kubenswrapper[5122]: I0224 00:21:47.387522 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4swbx"] Feb 24 00:21:47 crc kubenswrapper[5122]: W0224 00:21:47.392976 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8446a1b2_a999_46df_9f88_48793f82f831.slice/crio-497951c1c0e662b6ce6415bd8622ff8b327311d84b3e8da8744db5ffd0b91cd0 WatchSource:0}: Error finding container 497951c1c0e662b6ce6415bd8622ff8b327311d84b3e8da8744db5ffd0b91cd0: Status 404 returned error can't find the container with id 497951c1c0e662b6ce6415bd8622ff8b327311d84b3e8da8744db5ffd0b91cd0 Feb 24 00:21:47 crc kubenswrapper[5122]: I0224 00:21:47.408953 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swbx" event={"ID":"8446a1b2-a999-46df-9f88-48793f82f831","Type":"ContainerStarted","Data":"497951c1c0e662b6ce6415bd8622ff8b327311d84b3e8da8744db5ffd0b91cd0"} Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.406637 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-pg9j6"] Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.415885 5122 generic.go:358] "Generic (PLEG): container finished" podID="8446a1b2-a999-46df-9f88-48793f82f831" containerID="0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda" exitCode=0 Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.457345 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-pg9j6"] Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.457384 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swbx" 
event={"ID":"8446a1b2-a999-46df-9f88-48793f82f831","Type":"ContainerDied","Data":"0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda"} Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.457567 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-pg9j6" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.460045 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-k86r6\"" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.538562 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8r5\" (UniqueName: \"kubernetes.io/projected/290a7e41-6a78-42f2-822a-bf2edea66a56-kube-api-access-4m8r5\") pod \"cert-manager-759f64656b-pg9j6\" (UID: \"290a7e41-6a78-42f2-822a-bf2edea66a56\") " pod="cert-manager/cert-manager-759f64656b-pg9j6" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.538627 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290a7e41-6a78-42f2-822a-bf2edea66a56-bound-sa-token\") pod \"cert-manager-759f64656b-pg9j6\" (UID: \"290a7e41-6a78-42f2-822a-bf2edea66a56\") " pod="cert-manager/cert-manager-759f64656b-pg9j6" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.640437 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8r5\" (UniqueName: \"kubernetes.io/projected/290a7e41-6a78-42f2-822a-bf2edea66a56-kube-api-access-4m8r5\") pod \"cert-manager-759f64656b-pg9j6\" (UID: \"290a7e41-6a78-42f2-822a-bf2edea66a56\") " pod="cert-manager/cert-manager-759f64656b-pg9j6" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.640506 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/290a7e41-6a78-42f2-822a-bf2edea66a56-bound-sa-token\") pod \"cert-manager-759f64656b-pg9j6\" (UID: \"290a7e41-6a78-42f2-822a-bf2edea66a56\") " pod="cert-manager/cert-manager-759f64656b-pg9j6" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.675749 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290a7e41-6a78-42f2-822a-bf2edea66a56-bound-sa-token\") pod \"cert-manager-759f64656b-pg9j6\" (UID: \"290a7e41-6a78-42f2-822a-bf2edea66a56\") " pod="cert-manager/cert-manager-759f64656b-pg9j6" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.683149 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8r5\" (UniqueName: \"kubernetes.io/projected/290a7e41-6a78-42f2-822a-bf2edea66a56-kube-api-access-4m8r5\") pod \"cert-manager-759f64656b-pg9j6\" (UID: \"290a7e41-6a78-42f2-822a-bf2edea66a56\") " pod="cert-manager/cert-manager-759f64656b-pg9j6" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.773885 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-pg9j6" Feb 24 00:21:48 crc kubenswrapper[5122]: I0224 00:21:48.871276 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:21:49 crc kubenswrapper[5122]: I0224 00:21:49.132867 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-pg9j6"] Feb 24 00:21:49 crc kubenswrapper[5122]: W0224 00:21:49.140173 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod290a7e41_6a78_42f2_822a_bf2edea66a56.slice/crio-7b63e8866561aaa724256019eb3a850ec27b0d0e873d4397b03072be78a017db WatchSource:0}: Error finding container 7b63e8866561aaa724256019eb3a850ec27b0d0e873d4397b03072be78a017db: Status 404 returned error can't find the container with id 7b63e8866561aaa724256019eb3a850ec27b0d0e873d4397b03072be78a017db Feb 24 00:21:49 crc kubenswrapper[5122]: I0224 00:21:49.424476 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-pg9j6" event={"ID":"290a7e41-6a78-42f2-822a-bf2edea66a56","Type":"ContainerStarted","Data":"bad10a9a9842fffe4ec88bc63fd2de92e6a9f59dcc096182ab5fd68db8a7cf8d"} Feb 24 00:21:49 crc kubenswrapper[5122]: I0224 00:21:49.424977 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-pg9j6" event={"ID":"290a7e41-6a78-42f2-822a-bf2edea66a56","Type":"ContainerStarted","Data":"7b63e8866561aaa724256019eb3a850ec27b0d0e873d4397b03072be78a017db"} Feb 24 00:21:49 crc kubenswrapper[5122]: I0224 00:21:49.427214 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swbx" event={"ID":"8446a1b2-a999-46df-9f88-48793f82f831","Type":"ContainerStarted","Data":"3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35"} Feb 24 00:21:49 crc kubenswrapper[5122]: I0224 00:21:49.448929 
5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-pg9j6" podStartSLOduration=1.44890396 podStartE2EDuration="1.44890396s" podCreationTimestamp="2026-02-24 00:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:21:49.44274606 +0000 UTC m=+776.532200573" watchObservedRunningTime="2026-02-24 00:21:49.44890396 +0000 UTC m=+776.538358473" Feb 24 00:21:50 crc kubenswrapper[5122]: I0224 00:21:50.434984 5122 generic.go:358] "Generic (PLEG): container finished" podID="8446a1b2-a999-46df-9f88-48793f82f831" containerID="3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35" exitCode=0 Feb 24 00:21:50 crc kubenswrapper[5122]: I0224 00:21:50.435040 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swbx" event={"ID":"8446a1b2-a999-46df-9f88-48793f82f831","Type":"ContainerDied","Data":"3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35"} Feb 24 00:21:51 crc kubenswrapper[5122]: I0224 00:21:51.444016 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swbx" event={"ID":"8446a1b2-a999-46df-9f88-48793f82f831","Type":"ContainerStarted","Data":"8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312"} Feb 24 00:21:51 crc kubenswrapper[5122]: I0224 00:21:51.477610 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4swbx" podStartSLOduration=4.649407309 podStartE2EDuration="5.47758199s" podCreationTimestamp="2026-02-24 00:21:46 +0000 UTC" firstStartedPulling="2026-02-24 00:21:48.458313953 +0000 UTC m=+775.547768466" lastFinishedPulling="2026-02-24 00:21:49.286488634 +0000 UTC m=+776.375943147" observedRunningTime="2026-02-24 00:21:51.475183347 +0000 UTC m=+778.564637900" watchObservedRunningTime="2026-02-24 
00:21:51.47758199 +0000 UTC m=+778.567036533" Feb 24 00:21:56 crc kubenswrapper[5122]: I0224 00:21:56.907555 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4swbx" Feb 24 00:21:56 crc kubenswrapper[5122]: I0224 00:21:56.908418 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4swbx" Feb 24 00:21:56 crc kubenswrapper[5122]: I0224 00:21:56.953975 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4swbx" Feb 24 00:21:57 crc kubenswrapper[5122]: I0224 00:21:57.115258 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:21:57 crc kubenswrapper[5122]: I0224 00:21:57.115342 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:21:57 crc kubenswrapper[5122]: I0224 00:21:57.534477 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4swbx" Feb 24 00:21:57 crc kubenswrapper[5122]: I0224 00:21:57.571045 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4swbx"] Feb 24 00:21:59 crc kubenswrapper[5122]: I0224 00:21:59.508392 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4swbx" podUID="8446a1b2-a999-46df-9f88-48793f82f831" 
containerName="registry-server" containerID="cri-o://8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312" gracePeriod=2 Feb 24 00:21:59 crc kubenswrapper[5122]: I0224 00:21:59.908673 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4swbx" Feb 24 00:21:59 crc kubenswrapper[5122]: I0224 00:21:59.931273 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-utilities\") pod \"8446a1b2-a999-46df-9f88-48793f82f831\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " Feb 24 00:21:59 crc kubenswrapper[5122]: I0224 00:21:59.931328 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-catalog-content\") pod \"8446a1b2-a999-46df-9f88-48793f82f831\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " Feb 24 00:21:59 crc kubenswrapper[5122]: I0224 00:21:59.931394 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl5fh\" (UniqueName: \"kubernetes.io/projected/8446a1b2-a999-46df-9f88-48793f82f831-kube-api-access-zl5fh\") pod \"8446a1b2-a999-46df-9f88-48793f82f831\" (UID: \"8446a1b2-a999-46df-9f88-48793f82f831\") " Feb 24 00:21:59 crc kubenswrapper[5122]: I0224 00:21:59.932608 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-utilities" (OuterVolumeSpecName: "utilities") pod "8446a1b2-a999-46df-9f88-48793f82f831" (UID: "8446a1b2-a999-46df-9f88-48793f82f831"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:21:59 crc kubenswrapper[5122]: I0224 00:21:59.968302 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8446a1b2-a999-46df-9f88-48793f82f831-kube-api-access-zl5fh" (OuterVolumeSpecName: "kube-api-access-zl5fh") pod "8446a1b2-a999-46df-9f88-48793f82f831" (UID: "8446a1b2-a999-46df-9f88-48793f82f831"). InnerVolumeSpecName "kube-api-access-zl5fh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.001700 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8446a1b2-a999-46df-9f88-48793f82f831" (UID: "8446a1b2-a999-46df-9f88-48793f82f831"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.032339 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.032369 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8446a1b2-a999-46df-9f88-48793f82f831-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.032428 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zl5fh\" (UniqueName: \"kubernetes.io/projected/8446a1b2-a999-46df-9f88-48793f82f831-kube-api-access-zl5fh\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.139413 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531542-n5v8b"] Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 
00:22:00.140408 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8446a1b2-a999-46df-9f88-48793f82f831" containerName="extract-content" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.140435 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="8446a1b2-a999-46df-9f88-48793f82f831" containerName="extract-content" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.140457 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8446a1b2-a999-46df-9f88-48793f82f831" containerName="registry-server" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.140466 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="8446a1b2-a999-46df-9f88-48793f82f831" containerName="registry-server" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.140486 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8446a1b2-a999-46df-9f88-48793f82f831" containerName="extract-utilities" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.140494 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="8446a1b2-a999-46df-9f88-48793f82f831" containerName="extract-utilities" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.140657 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="8446a1b2-a999-46df-9f88-48793f82f831" containerName="registry-server" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.223996 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531542-n5v8b"] Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.224199 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.228382 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.229770 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.231178 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.235264 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwsv\" (UniqueName: \"kubernetes.io/projected/f7d5aff6-468e-4c2e-9115-e557c69f5947-kube-api-access-5vwsv\") pod \"auto-csr-approver-29531542-n5v8b\" (UID: \"f7d5aff6-468e-4c2e-9115-e557c69f5947\") " pod="openshift-infra/auto-csr-approver-29531542-n5v8b" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.337140 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5vwsv\" (UniqueName: \"kubernetes.io/projected/f7d5aff6-468e-4c2e-9115-e557c69f5947-kube-api-access-5vwsv\") pod \"auto-csr-approver-29531542-n5v8b\" (UID: \"f7d5aff6-468e-4c2e-9115-e557c69f5947\") " pod="openshift-infra/auto-csr-approver-29531542-n5v8b" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.366571 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vwsv\" (UniqueName: \"kubernetes.io/projected/f7d5aff6-468e-4c2e-9115-e557c69f5947-kube-api-access-5vwsv\") pod \"auto-csr-approver-29531542-n5v8b\" (UID: \"f7d5aff6-468e-4c2e-9115-e557c69f5947\") " pod="openshift-infra/auto-csr-approver-29531542-n5v8b" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.519570 5122 
generic.go:358] "Generic (PLEG): container finished" podID="8446a1b2-a999-46df-9f88-48793f82f831" containerID="8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312" exitCode=0 Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.519682 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4swbx" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.519799 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swbx" event={"ID":"8446a1b2-a999-46df-9f88-48793f82f831","Type":"ContainerDied","Data":"8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312"} Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.519951 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swbx" event={"ID":"8446a1b2-a999-46df-9f88-48793f82f831","Type":"ContainerDied","Data":"497951c1c0e662b6ce6415bd8622ff8b327311d84b3e8da8744db5ffd0b91cd0"} Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.519993 5122 scope.go:117] "RemoveContainer" containerID="8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.547535 5122 scope.go:117] "RemoveContainer" containerID="3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.553990 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.564130 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4swbx"] Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.570079 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4swbx"] Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.580933 5122 scope.go:117] "RemoveContainer" containerID="0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.598579 5122 scope.go:117] "RemoveContainer" containerID="8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312" Feb 24 00:22:00 crc kubenswrapper[5122]: E0224 00:22:00.599078 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312\": container with ID starting with 8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312 not found: ID does not exist" containerID="8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.599170 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312"} err="failed to get container status \"8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312\": rpc error: code = NotFound desc = could not find container \"8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312\": container with ID starting with 8385c81b30cb14a881737a13a78468c976d25b461d36c5e7696c5851c6be9312 not found: ID does not exist" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.599197 5122 scope.go:117] "RemoveContainer" 
containerID="3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35" Feb 24 00:22:00 crc kubenswrapper[5122]: E0224 00:22:00.599753 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35\": container with ID starting with 3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35 not found: ID does not exist" containerID="3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.599780 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35"} err="failed to get container status \"3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35\": rpc error: code = NotFound desc = could not find container \"3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35\": container with ID starting with 3a8f7252d257f7ca2a82a9405c44698ee2a2d135c694fb5082e8bcf179683d35 not found: ID does not exist" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.599794 5122 scope.go:117] "RemoveContainer" containerID="0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda" Feb 24 00:22:00 crc kubenswrapper[5122]: E0224 00:22:00.600091 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda\": container with ID starting with 0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda not found: ID does not exist" containerID="0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.600141 5122 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda"} err="failed to get container status \"0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda\": rpc error: code = NotFound desc = could not find container \"0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda\": container with ID starting with 0863009b172969807a4b09ad3ed9388fb62f7ebf612d95f4573c658a4fbf0fda not found: ID does not exist" Feb 24 00:22:00 crc kubenswrapper[5122]: I0224 00:22:00.767317 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531542-n5v8b"] Feb 24 00:22:01 crc kubenswrapper[5122]: I0224 00:22:01.530149 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" event={"ID":"f7d5aff6-468e-4c2e-9115-e557c69f5947","Type":"ContainerStarted","Data":"356ab7be053b26c174296e2cdb8524579ae2ef0633bc1a89e3cd6ce8fead6647"} Feb 24 00:22:01 crc kubenswrapper[5122]: I0224 00:22:01.789951 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8446a1b2-a999-46df-9f88-48793f82f831" path="/var/lib/kubelet/pods/8446a1b2-a999-46df-9f88-48793f82f831/volumes" Feb 24 00:22:02 crc kubenswrapper[5122]: I0224 00:22:02.538117 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" event={"ID":"f7d5aff6-468e-4c2e-9115-e557c69f5947","Type":"ContainerStarted","Data":"17e83be981f50909a50c6164c94d957bb7e211e9405694005686092f793b58a9"} Feb 24 00:22:02 crc kubenswrapper[5122]: I0224 00:22:02.556947 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" podStartSLOduration=1.124410935 podStartE2EDuration="2.556926046s" podCreationTimestamp="2026-02-24 00:22:00 +0000 UTC" firstStartedPulling="2026-02-24 00:22:00.769855364 +0000 UTC m=+787.859309877" lastFinishedPulling="2026-02-24 00:22:02.202370465 +0000 UTC 
m=+789.291824988" observedRunningTime="2026-02-24 00:22:02.553448404 +0000 UTC m=+789.642902917" watchObservedRunningTime="2026-02-24 00:22:02.556926046 +0000 UTC m=+789.646380559" Feb 24 00:22:03 crc kubenswrapper[5122]: I0224 00:22:03.545694 5122 generic.go:358] "Generic (PLEG): container finished" podID="f7d5aff6-468e-4c2e-9115-e557c69f5947" containerID="17e83be981f50909a50c6164c94d957bb7e211e9405694005686092f793b58a9" exitCode=0 Feb 24 00:22:03 crc kubenswrapper[5122]: I0224 00:22:03.546061 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" event={"ID":"f7d5aff6-468e-4c2e-9115-e557c69f5947","Type":"ContainerDied","Data":"17e83be981f50909a50c6164c94d957bb7e211e9405694005686092f793b58a9"} Feb 24 00:22:04 crc kubenswrapper[5122]: I0224 00:22:04.824471 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" Feb 24 00:22:04 crc kubenswrapper[5122]: I0224 00:22:04.905553 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vwsv\" (UniqueName: \"kubernetes.io/projected/f7d5aff6-468e-4c2e-9115-e557c69f5947-kube-api-access-5vwsv\") pod \"f7d5aff6-468e-4c2e-9115-e557c69f5947\" (UID: \"f7d5aff6-468e-4c2e-9115-e557c69f5947\") " Feb 24 00:22:04 crc kubenswrapper[5122]: I0224 00:22:04.911714 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d5aff6-468e-4c2e-9115-e557c69f5947-kube-api-access-5vwsv" (OuterVolumeSpecName: "kube-api-access-5vwsv") pod "f7d5aff6-468e-4c2e-9115-e557c69f5947" (UID: "f7d5aff6-468e-4c2e-9115-e557c69f5947"). InnerVolumeSpecName "kube-api-access-5vwsv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:22:05 crc kubenswrapper[5122]: I0224 00:22:05.006624 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5vwsv\" (UniqueName: \"kubernetes.io/projected/f7d5aff6-468e-4c2e-9115-e557c69f5947-kube-api-access-5vwsv\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:05 crc kubenswrapper[5122]: I0224 00:22:05.562402 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" event={"ID":"f7d5aff6-468e-4c2e-9115-e557c69f5947","Type":"ContainerDied","Data":"356ab7be053b26c174296e2cdb8524579ae2ef0633bc1a89e3cd6ce8fead6647"} Feb 24 00:22:05 crc kubenswrapper[5122]: I0224 00:22:05.562450 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="356ab7be053b26c174296e2cdb8524579ae2ef0633bc1a89e3cd6ce8fead6647" Feb 24 00:22:05 crc kubenswrapper[5122]: I0224 00:22:05.562413 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531542-n5v8b" Feb 24 00:22:05 crc kubenswrapper[5122]: I0224 00:22:05.623375 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531536-r64fm"] Feb 24 00:22:05 crc kubenswrapper[5122]: I0224 00:22:05.627767 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531536-r64fm"] Feb 24 00:22:05 crc kubenswrapper[5122]: I0224 00:22:05.782477 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f68066-6c73-4adf-b332-e4c155644702" path="/var/lib/kubelet/pods/91f68066-6c73-4adf-b332-e4c155644702/volumes" Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.114985 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.115501 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.115546 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.116012 5122 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"261340b5f7b11a4ce4a9ff704d0d02ee8484c6e0b40d48b9b50e904a701a287a"} pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.116091 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" containerID="cri-o://261340b5f7b11a4ce4a9ff704d0d02ee8484c6e0b40d48b9b50e904a701a287a" gracePeriod=600 Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.724139 5122 generic.go:358] "Generic (PLEG): container finished" podID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerID="261340b5f7b11a4ce4a9ff704d0d02ee8484c6e0b40d48b9b50e904a701a287a" exitCode=0 Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.724226 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" 
event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerDied","Data":"261340b5f7b11a4ce4a9ff704d0d02ee8484c6e0b40d48b9b50e904a701a287a"} Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.724720 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"2f5785bae16fc9d24757a682e5abe8ff71c9fc3ab688be3d82b7e331ef553c3b"} Feb 24 00:22:27 crc kubenswrapper[5122]: I0224 00:22:27.724742 5122 scope.go:117] "RemoveContainer" containerID="a2440177b838348268a0bef8a6e72892e9f62cf0d62c5963f5c3b068ced560cd" Feb 24 00:22:58 crc kubenswrapper[5122]: I0224 00:22:58.479933 5122 scope.go:117] "RemoveContainer" containerID="0d0957a3a775c1947f79875d3ae098e7d75ecd8ff6f157a8e21ec6653afc47f6" Feb 24 00:22:58 crc kubenswrapper[5122]: I0224 00:22:58.954621 5122 generic.go:358] "Generic (PLEG): container finished" podID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerID="98052cca4ce4852d410afe93a3d41bd059abca5796c66b6d2efefc5f6ecf72fe" exitCode=0 Feb 24 00:22:58 crc kubenswrapper[5122]: I0224 00:22:58.954692 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45bf38ed-1ab3-4e4f-960f-19695f49f433","Type":"ContainerDied","Data":"98052cca4ce4852d410afe93a3d41bd059abca5796c66b6d2efefc5f6ecf72fe"} Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.206220 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.318950 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-system-configs\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319365 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-blob-cache\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319392 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-run\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319420 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildworkdir\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319487 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-node-pullsecrets\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319523 5122 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-proxy-ca-bundles\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319544 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-ca-bundles\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319676 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-push\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319735 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-pull\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319755 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildcachedir\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319816 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2mz8\" (UniqueName: 
\"kubernetes.io/projected/45bf38ed-1ab3-4e4f-960f-19695f49f433-kube-api-access-h2mz8\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319859 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-root\") pod \"45bf38ed-1ab3-4e4f-960f-19695f49f433\" (UID: \"45bf38ed-1ab3-4e4f-960f-19695f49f433\") " Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.319942 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.320192 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.320316 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.320331 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.320302 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.320763 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.322546 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.327618 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.328224 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bf38ed-1ab3-4e4f-960f-19695f49f433-kube-api-access-h2mz8" (OuterVolumeSpecName: "kube-api-access-h2mz8") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "kube-api-access-h2mz8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.328593 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.330389 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.356638 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.421939 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.422187 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/45bf38ed-1ab3-4e4f-960f-19695f49f433-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.422258 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.422326 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2mz8\" (UniqueName: \"kubernetes.io/projected/45bf38ed-1ab3-4e4f-960f-19695f49f433-kube-api-access-h2mz8\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.422393 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.422445 5122 
reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.422533 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.422621 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.532738 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.624774 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.977127 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45bf38ed-1ab3-4e4f-960f-19695f49f433","Type":"ContainerDied","Data":"97a248be205efafa6e7d39c26a7faced8f8454ac1c7405ebab127eb89631c30b"} Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.977168 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:23:00 crc kubenswrapper[5122]: I0224 00:23:00.977192 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97a248be205efafa6e7d39c26a7faced8f8454ac1c7405ebab127eb89631c30b" Feb 24 00:23:02 crc kubenswrapper[5122]: I0224 00:23:02.394796 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "45bf38ed-1ab3-4e4f-960f-19695f49f433" (UID: "45bf38ed-1ab3-4e4f-960f-19695f49f433"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:23:02 crc kubenswrapper[5122]: I0224 00:23:02.475978 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45bf38ed-1ab3-4e4f-960f-19695f49f433-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.923234 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924696 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerName="git-clone" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924726 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerName="git-clone" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924753 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7d5aff6-468e-4c2e-9115-e557c69f5947" containerName="oc" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924766 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d5aff6-468e-4c2e-9115-e557c69f5947" 
containerName="oc" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924785 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerName="manage-dockerfile" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924798 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerName="manage-dockerfile" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924821 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerName="docker-build" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924834 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerName="docker-build" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.924997 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7d5aff6-468e-4c2e-9115-e557c69f5947" containerName="oc" Feb 24 00:23:04 crc kubenswrapper[5122]: I0224 00:23:04.925022 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="45bf38ed-1ab3-4e4f-960f-19695f49f433" containerName="docker-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.513131 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.513302 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.515637 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\"" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.515645 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\"" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.515664 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\"" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.516504 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-28rxw\"" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.612638 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.612697 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.612766 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: 
\"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-push\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.612831 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.612908 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.612936 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.612969 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24cs8\" (UniqueName: \"kubernetes.io/projected/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-kube-api-access-24cs8\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc 
kubenswrapper[5122]: I0224 00:23:05.612994 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.613017 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.613031 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.613051 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.613164 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-pull\") 
pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.714260 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.714338 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-push\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.714493 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.714601 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.714803 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.714896 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.714997 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-24cs8\" (UniqueName: \"kubernetes.io/projected/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-kube-api-access-24cs8\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.715122 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.715202 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.715251 5122 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.715719 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.715869 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.715957 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.716034 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.716252 5122 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.716529 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.716620 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.716727 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.716808 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 
00:23:05.717140 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.717697 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.721907 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.730005 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-push\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.736262 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-24cs8\" (UniqueName: \"kubernetes.io/projected/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-kube-api-access-24cs8\") pod \"smart-gateway-operator-1-build\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:05 crc kubenswrapper[5122]: I0224 00:23:05.832435 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:06 crc kubenswrapper[5122]: I0224 00:23:06.081619 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:23:07 crc kubenswrapper[5122]: I0224 00:23:07.023541 5122 generic.go:358] "Generic (PLEG): container finished" podID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" containerID="caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359" exitCode=0 Feb 24 00:23:07 crc kubenswrapper[5122]: I0224 00:23:07.023701 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9","Type":"ContainerDied","Data":"caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359"} Feb 24 00:23:07 crc kubenswrapper[5122]: I0224 00:23:07.023759 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9","Type":"ContainerStarted","Data":"361d3c5ee9f741450b0a51be11fbd2221032864078ccbef5a58f40e080a647e7"} Feb 24 00:23:08 crc kubenswrapper[5122]: I0224 00:23:08.033516 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9","Type":"ContainerStarted","Data":"8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f"} Feb 24 00:23:08 crc kubenswrapper[5122]: I0224 00:23:08.080602 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=4.080569972 podStartE2EDuration="4.080569972s" podCreationTimestamp="2026-02-24 00:23:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:23:08.064551589 +0000 UTC m=+855.154006102" watchObservedRunningTime="2026-02-24 00:23:08.080569972 +0000 UTC m=+855.170024505" Feb 24 00:23:15 crc kubenswrapper[5122]: I0224 00:23:15.518913 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:23:15 crc kubenswrapper[5122]: I0224 00:23:15.519781 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" containerName="docker-build" containerID="cri-o://8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f" gracePeriod=30 Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.178577 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.191341 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.191873 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.196295 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.196373 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.207347 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319160 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-push\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319488 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319529 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmzj\" (UniqueName: \"kubernetes.io/projected/14f81fe4-c66d-4774-8769-42617fd813cd-kube-api-access-gwmzj\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 
24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319572 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319602 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319798 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319837 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319892 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-buildcachedir\") pod 
\"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.319954 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.320004 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.320029 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.320055 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.420959 5122 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-push\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.421016 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.421057 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwmzj\" (UniqueName: \"kubernetes.io/projected/14f81fe4-c66d-4774-8769-42617fd813cd-kube-api-access-gwmzj\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.421174 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.421201 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc 
kubenswrapper[5122]: I0224 00:23:17.421423 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.421963 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.421743 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.421571 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422160 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 
00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422252 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422325 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422387 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422403 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422408 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" 
Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422466 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422493 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422663 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.422827 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.423120 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.423157 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.427836 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-push\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.431398 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.440065 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwmzj\" (UniqueName: \"kubernetes.io/projected/14f81fe4-c66d-4774-8769-42617fd813cd-kube-api-access-gwmzj\") pod \"smart-gateway-operator-2-build\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.517361 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.736818 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 24 00:23:17 crc kubenswrapper[5122]: W0224 00:23:17.744369 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f81fe4_c66d_4774_8769_42617fd813cd.slice/crio-9072b768dfbc7b1fcee6427dd9cabd5c2c77814fc8203cf92c99a49b3d270a86 WatchSource:0}: Error finding container 9072b768dfbc7b1fcee6427dd9cabd5c2c77814fc8203cf92c99a49b3d270a86: Status 404 returned error can't find the container with id 9072b768dfbc7b1fcee6427dd9cabd5c2c77814fc8203cf92c99a49b3d270a86 Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.757846 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9/docker-build/0.log" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.758302 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827509 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-run\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827557 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24cs8\" (UniqueName: \"kubernetes.io/projected/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-kube-api-access-24cs8\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827585 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildcachedir\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827600 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-node-pullsecrets\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827706 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-push\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827701 5122 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827744 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-pull\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827760 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827772 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildworkdir\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827792 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-system-configs\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827830 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-blob-cache\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827910 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-ca-bundles\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827938 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-proxy-ca-bundles\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.827966 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-root\") pod \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\" (UID: \"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9\") " Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.828204 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.828222 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.828556 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.828971 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.829204 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.829299 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.829308 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.832302 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.832482 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.832699 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-kube-api-access-24cs8" (OuterVolumeSpecName: "kube-api-access-24cs8") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "kube-api-access-24cs8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.834708 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929496 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929538 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929554 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929565 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929577 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-24cs8\" (UniqueName: \"kubernetes.io/projected/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-kube-api-access-24cs8\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929589 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929601 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: 
\"kubernetes.io/secret/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929612 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.929623 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[5122]: I0224 00:23:17.980944 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" (UID: "55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.030767 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.110594 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9/docker-build/0.log" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.110958 5122 generic.go:358] "Generic (PLEG): container finished" podID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" containerID="8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f" exitCode=1 Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.111036 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.111104 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9","Type":"ContainerDied","Data":"8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f"} Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.111166 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9","Type":"ContainerDied","Data":"361d3c5ee9f741450b0a51be11fbd2221032864078ccbef5a58f40e080a647e7"} Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.111190 5122 scope.go:117] "RemoveContainer" containerID="8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.114325 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"14f81fe4-c66d-4774-8769-42617fd813cd","Type":"ContainerStarted","Data":"9072b768dfbc7b1fcee6427dd9cabd5c2c77814fc8203cf92c99a49b3d270a86"} Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.146573 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.151973 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.193308 5122 scope.go:117] "RemoveContainer" containerID="caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.485688 5122 scope.go:117] "RemoveContainer" containerID="8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f" Feb 24 00:23:18 crc kubenswrapper[5122]: E0224 00:23:18.486356 
5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f\": container with ID starting with 8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f not found: ID does not exist" containerID="8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.486422 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f"} err="failed to get container status \"8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f\": rpc error: code = NotFound desc = could not find container \"8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f\": container with ID starting with 8f5597175b8a9d7c6f92dd6b4bb7b5e707d78d30e6f4ac626203095dcc03578f not found: ID does not exist" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.486455 5122 scope.go:117] "RemoveContainer" containerID="caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359" Feb 24 00:23:18 crc kubenswrapper[5122]: E0224 00:23:18.486839 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359\": container with ID starting with caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359 not found: ID does not exist" containerID="caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359" Feb 24 00:23:18 crc kubenswrapper[5122]: I0224 00:23:18.486906 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359"} err="failed to get container status \"caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359\": rpc error: code = 
NotFound desc = could not find container \"caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359\": container with ID starting with caeb1050f20c9aa6dba55d76f304c9fc60b833fe421b4b719f72717668f8b359 not found: ID does not exist" Feb 24 00:23:19 crc kubenswrapper[5122]: I0224 00:23:19.133116 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"14f81fe4-c66d-4774-8769-42617fd813cd","Type":"ContainerStarted","Data":"7922932ad4e5de94a7beca0a08e43f9415663c1973536caf241d1b9a69eccbac"} Feb 24 00:23:19 crc kubenswrapper[5122]: I0224 00:23:19.788198 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" path="/var/lib/kubelet/pods/55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9/volumes" Feb 24 00:23:20 crc kubenswrapper[5122]: I0224 00:23:20.143605 5122 generic.go:358] "Generic (PLEG): container finished" podID="14f81fe4-c66d-4774-8769-42617fd813cd" containerID="7922932ad4e5de94a7beca0a08e43f9415663c1973536caf241d1b9a69eccbac" exitCode=0 Feb 24 00:23:20 crc kubenswrapper[5122]: I0224 00:23:20.143699 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"14f81fe4-c66d-4774-8769-42617fd813cd","Type":"ContainerDied","Data":"7922932ad4e5de94a7beca0a08e43f9415663c1973536caf241d1b9a69eccbac"} Feb 24 00:23:21 crc kubenswrapper[5122]: I0224 00:23:21.155924 5122 generic.go:358] "Generic (PLEG): container finished" podID="14f81fe4-c66d-4774-8769-42617fd813cd" containerID="a1b6cfe96017bab0d3c224141e6bfe7083dd5b39fcac9bd9926ddd73b1767b86" exitCode=0 Feb 24 00:23:21 crc kubenswrapper[5122]: I0224 00:23:21.156108 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"14f81fe4-c66d-4774-8769-42617fd813cd","Type":"ContainerDied","Data":"a1b6cfe96017bab0d3c224141e6bfe7083dd5b39fcac9bd9926ddd73b1767b86"} Feb 24 00:23:21 crc 
kubenswrapper[5122]: I0224 00:23:21.221991 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_14f81fe4-c66d-4774-8769-42617fd813cd/manage-dockerfile/0.log" Feb 24 00:23:22 crc kubenswrapper[5122]: I0224 00:23:22.165513 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"14f81fe4-c66d-4774-8769-42617fd813cd","Type":"ContainerStarted","Data":"67ec1062940ffc0f421aac25b44dafcad4f7be4580733ce252db3e5de5061ba7"} Feb 24 00:23:22 crc kubenswrapper[5122]: I0224 00:23:22.204848 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.204833649 podStartE2EDuration="5.204833649s" podCreationTimestamp="2026-02-24 00:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:23:22.20222849 +0000 UTC m=+869.291683013" watchObservedRunningTime="2026-02-24 00:23:22.204833649 +0000 UTC m=+869.294288162" Feb 24 00:23:54 crc kubenswrapper[5122]: I0224 00:23:54.181907 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jz28d_b5f97112-ba2a-46c0-a285-a845d2f96be9/kube-multus/0.log" Feb 24 00:23:54 crc kubenswrapper[5122]: I0224 00:23:54.187235 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jz28d_b5f97112-ba2a-46c0-a285-a845d2f96be9/kube-multus/0.log" Feb 24 00:23:54 crc kubenswrapper[5122]: I0224 00:23:54.193145 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:23:54 crc kubenswrapper[5122]: I0224 00:23:54.206949 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.134171 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531544-dqz2m"] Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.135565 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" containerName="docker-build" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.135586 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" containerName="docker-build" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.135623 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" containerName="manage-dockerfile" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.135631 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" containerName="manage-dockerfile" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.135754 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="55d5ab5c-0dc7-4c2e-b90d-1e7df8d0f5b9" containerName="docker-build" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.838955 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531544-dqz2m"] Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.839112 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531544-dqz2m" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.841846 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.842124 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.841932 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:24:00 crc kubenswrapper[5122]: I0224 00:24:00.982830 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffkkm\" (UniqueName: \"kubernetes.io/projected/dd48b1e9-6fd3-4efc-9af6-99b06c47715b-kube-api-access-ffkkm\") pod \"auto-csr-approver-29531544-dqz2m\" (UID: \"dd48b1e9-6fd3-4efc-9af6-99b06c47715b\") " pod="openshift-infra/auto-csr-approver-29531544-dqz2m" Feb 24 00:24:01 crc kubenswrapper[5122]: I0224 00:24:01.084787 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ffkkm\" (UniqueName: \"kubernetes.io/projected/dd48b1e9-6fd3-4efc-9af6-99b06c47715b-kube-api-access-ffkkm\") pod \"auto-csr-approver-29531544-dqz2m\" (UID: \"dd48b1e9-6fd3-4efc-9af6-99b06c47715b\") " pod="openshift-infra/auto-csr-approver-29531544-dqz2m" Feb 24 00:24:01 crc kubenswrapper[5122]: I0224 00:24:01.113024 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffkkm\" (UniqueName: \"kubernetes.io/projected/dd48b1e9-6fd3-4efc-9af6-99b06c47715b-kube-api-access-ffkkm\") pod \"auto-csr-approver-29531544-dqz2m\" (UID: \"dd48b1e9-6fd3-4efc-9af6-99b06c47715b\") " pod="openshift-infra/auto-csr-approver-29531544-dqz2m" Feb 24 00:24:01 crc kubenswrapper[5122]: I0224 00:24:01.169832 5122 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531544-dqz2m" Feb 24 00:24:01 crc kubenswrapper[5122]: I0224 00:24:01.630046 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531544-dqz2m"] Feb 24 00:24:01 crc kubenswrapper[5122]: W0224 00:24:01.635231 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd48b1e9_6fd3_4efc_9af6_99b06c47715b.slice/crio-c6fe17bd1ff794fcd7fcd8288b4ef1c246047cdf7b99b4ad58ca58180e0bd73c WatchSource:0}: Error finding container c6fe17bd1ff794fcd7fcd8288b4ef1c246047cdf7b99b4ad58ca58180e0bd73c: Status 404 returned error can't find the container with id c6fe17bd1ff794fcd7fcd8288b4ef1c246047cdf7b99b4ad58ca58180e0bd73c Feb 24 00:24:02 crc kubenswrapper[5122]: I0224 00:24:02.470203 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531544-dqz2m" event={"ID":"dd48b1e9-6fd3-4efc-9af6-99b06c47715b","Type":"ContainerStarted","Data":"c6fe17bd1ff794fcd7fcd8288b4ef1c246047cdf7b99b4ad58ca58180e0bd73c"} Feb 24 00:24:03 crc kubenswrapper[5122]: I0224 00:24:03.482115 5122 generic.go:358] "Generic (PLEG): container finished" podID="dd48b1e9-6fd3-4efc-9af6-99b06c47715b" containerID="793f2270d6ccd6dd8f42f9e09f125ada03f4189dc85052e06ab4820daf36011c" exitCode=0 Feb 24 00:24:03 crc kubenswrapper[5122]: I0224 00:24:03.482220 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531544-dqz2m" event={"ID":"dd48b1e9-6fd3-4efc-9af6-99b06c47715b","Type":"ContainerDied","Data":"793f2270d6ccd6dd8f42f9e09f125ada03f4189dc85052e06ab4820daf36011c"} Feb 24 00:24:04 crc kubenswrapper[5122]: I0224 00:24:04.833747 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531544-dqz2m" Feb 24 00:24:04 crc kubenswrapper[5122]: I0224 00:24:04.944759 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffkkm\" (UniqueName: \"kubernetes.io/projected/dd48b1e9-6fd3-4efc-9af6-99b06c47715b-kube-api-access-ffkkm\") pod \"dd48b1e9-6fd3-4efc-9af6-99b06c47715b\" (UID: \"dd48b1e9-6fd3-4efc-9af6-99b06c47715b\") " Feb 24 00:24:04 crc kubenswrapper[5122]: I0224 00:24:04.959343 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd48b1e9-6fd3-4efc-9af6-99b06c47715b-kube-api-access-ffkkm" (OuterVolumeSpecName: "kube-api-access-ffkkm") pod "dd48b1e9-6fd3-4efc-9af6-99b06c47715b" (UID: "dd48b1e9-6fd3-4efc-9af6-99b06c47715b"). InnerVolumeSpecName "kube-api-access-ffkkm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:24:05 crc kubenswrapper[5122]: I0224 00:24:05.046369 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffkkm\" (UniqueName: \"kubernetes.io/projected/dd48b1e9-6fd3-4efc-9af6-99b06c47715b-kube-api-access-ffkkm\") on node \"crc\" DevicePath \"\"" Feb 24 00:24:05 crc kubenswrapper[5122]: I0224 00:24:05.500024 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531544-dqz2m" event={"ID":"dd48b1e9-6fd3-4efc-9af6-99b06c47715b","Type":"ContainerDied","Data":"c6fe17bd1ff794fcd7fcd8288b4ef1c246047cdf7b99b4ad58ca58180e0bd73c"} Feb 24 00:24:05 crc kubenswrapper[5122]: I0224 00:24:05.500403 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6fe17bd1ff794fcd7fcd8288b4ef1c246047cdf7b99b4ad58ca58180e0bd73c" Feb 24 00:24:05 crc kubenswrapper[5122]: I0224 00:24:05.500149 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531544-dqz2m" Feb 24 00:24:05 crc kubenswrapper[5122]: I0224 00:24:05.906124 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531538-ch8p9"] Feb 24 00:24:05 crc kubenswrapper[5122]: I0224 00:24:05.911949 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531538-ch8p9"] Feb 24 00:24:07 crc kubenswrapper[5122]: I0224 00:24:07.782767 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77aac90-1868-4c57-8629-c69449252bd9" path="/var/lib/kubelet/pods/f77aac90-1868-4c57-8629-c69449252bd9/volumes" Feb 24 00:24:26 crc kubenswrapper[5122]: I0224 00:24:26.651816 5122 generic.go:358] "Generic (PLEG): container finished" podID="14f81fe4-c66d-4774-8769-42617fd813cd" containerID="67ec1062940ffc0f421aac25b44dafcad4f7be4580733ce252db3e5de5061ba7" exitCode=0 Feb 24 00:24:26 crc kubenswrapper[5122]: I0224 00:24:26.651951 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"14f81fe4-c66d-4774-8769-42617fd813cd","Type":"ContainerDied","Data":"67ec1062940ffc0f421aac25b44dafcad4f7be4580733ce252db3e5de5061ba7"} Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.115471 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.115584 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.887652 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.979898 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-buildcachedir\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.979986 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-buildworkdir\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980006 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980029 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-root\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980235 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-node-pullsecrets\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980298 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-build-blob-cache\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980357 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-push\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980390 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-run\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980417 5122 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980464 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-ca-bundles\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980536 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-pull\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980595 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwmzj\" (UniqueName: \"kubernetes.io/projected/14f81fe4-c66d-4774-8769-42617fd813cd-kube-api-access-gwmzj\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980635 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-system-configs\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.980705 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-proxy-ca-bundles\") pod \"14f81fe4-c66d-4774-8769-42617fd813cd\" (UID: \"14f81fe4-c66d-4774-8769-42617fd813cd\") " Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.981383 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.981418 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/14f81fe4-c66d-4774-8769-42617fd813cd-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.981477 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.981737 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.981975 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.982477 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.985826 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.987219 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f81fe4-c66d-4774-8769-42617fd813cd-kube-api-access-gwmzj" (OuterVolumeSpecName: "kube-api-access-gwmzj") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "kube-api-access-gwmzj". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.988022 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:24:27 crc kubenswrapper[5122]: I0224 00:24:27.990368 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.082723 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.082804 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.082859 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwmzj\" (UniqueName: \"kubernetes.io/projected/14f81fe4-c66d-4774-8769-42617fd813cd-kube-api-access-gwmzj\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.082883 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.082946 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f81fe4-c66d-4774-8769-42617fd813cd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.082968 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.082990 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/14f81fe4-c66d-4774-8769-42617fd813cd-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.083013 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.225223 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.286491 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.668703 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"14f81fe4-c66d-4774-8769-42617fd813cd","Type":"ContainerDied","Data":"9072b768dfbc7b1fcee6427dd9cabd5c2c77814fc8203cf92c99a49b3d270a86"}
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.668751 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9072b768dfbc7b1fcee6427dd9cabd5c2c77814fc8203cf92c99a49b3d270a86"
Feb 24 00:24:28 crc kubenswrapper[5122]: I0224 00:24:28.668748 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Feb 24 00:24:30 crc kubenswrapper[5122]: I0224 00:24:30.313740 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "14f81fe4-c66d-4774-8769-42617fd813cd" (UID: "14f81fe4-c66d-4774-8769-42617fd813cd"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:24:30 crc kubenswrapper[5122]: I0224 00:24:30.316835 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/14f81fe4-c66d-4774-8769-42617fd813cd-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.576204 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.576952 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14f81fe4-c66d-4774-8769-42617fd813cd" containerName="manage-dockerfile"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.576970 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f81fe4-c66d-4774-8769-42617fd813cd" containerName="manage-dockerfile"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.576995 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14f81fe4-c66d-4774-8769-42617fd813cd" containerName="docker-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.577003 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f81fe4-c66d-4774-8769-42617fd813cd" containerName="docker-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.577029 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd48b1e9-6fd3-4efc-9af6-99b06c47715b" containerName="oc"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.577038 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd48b1e9-6fd3-4efc-9af6-99b06c47715b" containerName="oc"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.577053 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14f81fe4-c66d-4774-8769-42617fd813cd" containerName="git-clone"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.577061 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f81fe4-c66d-4774-8769-42617fd813cd" containerName="git-clone"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.577211 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd48b1e9-6fd3-4efc-9af6-99b06c47715b" containerName="oc"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.577227 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="14f81fe4-c66d-4774-8769-42617fd813cd" containerName="docker-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.664569 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.664705 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.666795 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\""
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.667751 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\""
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.667915 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\""
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.668355 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-28rxw\""
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752238 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildcachedir\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752286 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752316 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-pull\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752344 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-root\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752375 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752461 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752489 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-system-configs\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752512 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752577 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-run\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752601 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvpcr\" (UniqueName: \"kubernetes.io/projected/8a999bf9-3862-4f76-8d16-ad570dad06dc-kube-api-access-qvpcr\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752630 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-push\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.752674 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildworkdir\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.853992 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-run\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854105 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvpcr\" (UniqueName: \"kubernetes.io/projected/8a999bf9-3862-4f76-8d16-ad570dad06dc-kube-api-access-qvpcr\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854160 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-push\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854228 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildworkdir\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854311 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildcachedir\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854355 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854398 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-pull\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854436 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-root\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854490 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854496 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-run\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854543 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854589 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-system-configs\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854628 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854650 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854734 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildcachedir\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.854936 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.855178 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-root\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.855712 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-system-configs\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.855837 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.855861 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildworkdir\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.856690 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.861181 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-push\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.861194 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-pull\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.885235 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvpcr\" (UniqueName: \"kubernetes.io/projected/8a999bf9-3862-4f76-8d16-ad570dad06dc-kube-api-access-qvpcr\") pod \"sg-core-1-build\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") " pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:32 crc kubenswrapper[5122]: I0224 00:24:32.978363 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:33 crc kubenswrapper[5122]: I0224 00:24:33.242669 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 24 00:24:33 crc kubenswrapper[5122]: I0224 00:24:33.252902 5122 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 00:24:33 crc kubenswrapper[5122]: I0224 00:24:33.712582 5122 generic.go:358] "Generic (PLEG): container finished" podID="8a999bf9-3862-4f76-8d16-ad570dad06dc" containerID="4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e" exitCode=0
Feb 24 00:24:33 crc kubenswrapper[5122]: I0224 00:24:33.712679 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8a999bf9-3862-4f76-8d16-ad570dad06dc","Type":"ContainerDied","Data":"4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e"}
Feb 24 00:24:33 crc kubenswrapper[5122]: I0224 00:24:33.712705 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8a999bf9-3862-4f76-8d16-ad570dad06dc","Type":"ContainerStarted","Data":"069258f0a44fbd1d8b759ec4465980d57a7602d136589c169c002ed86e8459af"}
Feb 24 00:24:34 crc kubenswrapper[5122]: I0224 00:24:34.724898 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8a999bf9-3862-4f76-8d16-ad570dad06dc","Type":"ContainerStarted","Data":"a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312"}
Feb 24 00:24:34 crc kubenswrapper[5122]: I0224 00:24:34.754262 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=2.754239782 podStartE2EDuration="2.754239782s" podCreationTimestamp="2026-02-24 00:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:24:34.753476132 +0000 UTC m=+941.842930665" watchObservedRunningTime="2026-02-24 00:24:34.754239782 +0000 UTC m=+941.843694305"
Feb 24 00:24:42 crc kubenswrapper[5122]: I0224 00:24:42.955648 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 24 00:24:42 crc kubenswrapper[5122]: I0224 00:24:42.956550 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="8a999bf9-3862-4f76-8d16-ad570dad06dc" containerName="docker-build" containerID="cri-o://a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312" gracePeriod=30
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.379534 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_8a999bf9-3862-4f76-8d16-ad570dad06dc/docker-build/0.log"
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.379862 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509065 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-ca-bundles\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509151 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-proxy-ca-bundles\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509230 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildcachedir\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509309 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509399 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvpcr\" (UniqueName: \"kubernetes.io/projected/8a999bf9-3862-4f76-8d16-ad570dad06dc-kube-api-access-qvpcr\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509457 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildworkdir\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509817 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509895 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-system-configs\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.510008 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.509925 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-node-pullsecrets\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.510400 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-pull\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.510098 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.510316 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.510358 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511292 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-push\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511362 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-run\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511402 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-root\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511434 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-blob-cache\") pod \"8a999bf9-3862-4f76-8d16-ad570dad06dc\" (UID: \"8a999bf9-3862-4f76-8d16-ad570dad06dc\") "
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511891 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511912 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511931 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511947 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511964 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.511982 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a999bf9-3862-4f76-8d16-ad570dad06dc-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.512579 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.515643 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.515699 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a999bf9-3862-4f76-8d16-ad570dad06dc-kube-api-access-qvpcr" (OuterVolumeSpecName: "kube-api-access-qvpcr") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "kube-api-access-qvpcr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.515688 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.613530 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qvpcr\" (UniqueName: \"kubernetes.io/projected/8a999bf9-3862-4f76-8d16-ad570dad06dc-kube-api-access-qvpcr\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.613575 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.613586 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/8a999bf9-3862-4f76-8d16-ad570dad06dc-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.613594 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.631183 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.715170 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.756470 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8a999bf9-3862-4f76-8d16-ad570dad06dc" (UID: "8a999bf9-3862-4f76-8d16-ad570dad06dc"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.796401 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_8a999bf9-3862-4f76-8d16-ad570dad06dc/docker-build/0.log"
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.796700 5122 generic.go:358] "Generic (PLEG): container finished" podID="8a999bf9-3862-4f76-8d16-ad570dad06dc" containerID="a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312" exitCode=1
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.796802 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8a999bf9-3862-4f76-8d16-ad570dad06dc","Type":"ContainerDied","Data":"a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312"}
Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.796822 5122 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.796851 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8a999bf9-3862-4f76-8d16-ad570dad06dc","Type":"ContainerDied","Data":"069258f0a44fbd1d8b759ec4465980d57a7602d136589c169c002ed86e8459af"} Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.796871 5122 scope.go:117] "RemoveContainer" containerID="a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312" Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.816155 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a999bf9-3862-4f76-8d16-ad570dad06dc-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.841738 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.847324 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.850149 5122 scope.go:117] "RemoveContainer" containerID="4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e" Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.915170 5122 scope.go:117] "RemoveContainer" containerID="a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312" Feb 24 00:24:43 crc kubenswrapper[5122]: E0224 00:24:43.915606 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312\": container with ID starting with a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312 not found: ID does not exist" containerID="a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312" Feb 24 00:24:43 crc 
kubenswrapper[5122]: I0224 00:24:43.915656 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312"} err="failed to get container status \"a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312\": rpc error: code = NotFound desc = could not find container \"a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312\": container with ID starting with a313c89a7ac624f781cadb0d1a434f9ebe1bee4eca3ffe11dc68978a33784312 not found: ID does not exist" Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.915691 5122 scope.go:117] "RemoveContainer" containerID="4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e" Feb 24 00:24:43 crc kubenswrapper[5122]: E0224 00:24:43.918557 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e\": container with ID starting with 4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e not found: ID does not exist" containerID="4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e" Feb 24 00:24:43 crc kubenswrapper[5122]: I0224 00:24:43.918604 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e"} err="failed to get container status \"4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e\": rpc error: code = NotFound desc = could not find container \"4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e\": container with ID starting with 4dcb367341ebaf635bfaad8a9967b580e2d6bdd68a85e9cde5da52ca7690725e not found: ID does not exist" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.607285 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 24 00:24:44 crc 
kubenswrapper[5122]: I0224 00:24:44.608139 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a999bf9-3862-4f76-8d16-ad570dad06dc" containerName="docker-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.608162 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a999bf9-3862-4f76-8d16-ad570dad06dc" containerName="docker-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.608181 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a999bf9-3862-4f76-8d16-ad570dad06dc" containerName="manage-dockerfile" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.608191 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a999bf9-3862-4f76-8d16-ad570dad06dc" containerName="manage-dockerfile" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.608401 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="8a999bf9-3862-4f76-8d16-ad570dad06dc" containerName="docker-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.686591 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.686724 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.688702 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.688760 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.689507 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-28rxw\"" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.689918 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.828355 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.828423 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.828596 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " 
pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.828668 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tq67\" (UniqueName: \"kubernetes.io/projected/6ead887b-19da-46b8-af4b-e091bce80ed0-kube-api-access-2tq67\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.828953 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.828992 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.829025 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.829116 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " 
pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.829335 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-pull\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.829390 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.829492 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.829533 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-push\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931113 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " 
pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931448 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-push\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931478 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931501 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931547 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931574 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tq67\" (UniqueName: \"kubernetes.io/projected/6ead887b-19da-46b8-af4b-e091bce80ed0-kube-api-access-2tq67\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc 
kubenswrapper[5122]: I0224 00:24:44.931603 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931623 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931647 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931671 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931745 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-pull\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931781 5122 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.931857 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.932211 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.932746 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.932845 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.932886 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.932853 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.932931 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.933023 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.933835 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.938845 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-pull\") pod \"sg-core-2-build\" (UID: 
\"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.939663 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-push\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:44 crc kubenswrapper[5122]: I0224 00:24:44.958982 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tq67\" (UniqueName: \"kubernetes.io/projected/6ead887b-19da-46b8-af4b-e091bce80ed0-kube-api-access-2tq67\") pod \"sg-core-2-build\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:24:45 crc kubenswrapper[5122]: I0224 00:24:44.999984 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 24 00:24:45 crc kubenswrapper[5122]: I0224 00:24:45.217340 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 24 00:24:45 crc kubenswrapper[5122]: I0224 00:24:45.782341 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a999bf9-3862-4f76-8d16-ad570dad06dc" path="/var/lib/kubelet/pods/8a999bf9-3862-4f76-8d16-ad570dad06dc/volumes" Feb 24 00:24:45 crc kubenswrapper[5122]: I0224 00:24:45.820146 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6ead887b-19da-46b8-af4b-e091bce80ed0","Type":"ContainerStarted","Data":"b0647f1b61f9ed634b6a046d2a70ef762a8a75bcb9cefb4271e74c93cb474f69"} Feb 24 00:24:45 crc kubenswrapper[5122]: I0224 00:24:45.820243 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" 
event={"ID":"6ead887b-19da-46b8-af4b-e091bce80ed0","Type":"ContainerStarted","Data":"e65c9ef09bc99d468175748a5f61b07f9b9ea9a277ecd81d08d87fd9b09d9f59"} Feb 24 00:24:46 crc kubenswrapper[5122]: I0224 00:24:46.831058 5122 generic.go:358] "Generic (PLEG): container finished" podID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerID="b0647f1b61f9ed634b6a046d2a70ef762a8a75bcb9cefb4271e74c93cb474f69" exitCode=0 Feb 24 00:24:46 crc kubenswrapper[5122]: I0224 00:24:46.831179 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6ead887b-19da-46b8-af4b-e091bce80ed0","Type":"ContainerDied","Data":"b0647f1b61f9ed634b6a046d2a70ef762a8a75bcb9cefb4271e74c93cb474f69"} Feb 24 00:24:47 crc kubenswrapper[5122]: I0224 00:24:47.841495 5122 generic.go:358] "Generic (PLEG): container finished" podID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerID="41fada879d0f1336f54f7192c32707654c4c15094c07fe422fd970d9f58196a7" exitCode=0 Feb 24 00:24:47 crc kubenswrapper[5122]: I0224 00:24:47.841630 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6ead887b-19da-46b8-af4b-e091bce80ed0","Type":"ContainerDied","Data":"41fada879d0f1336f54f7192c32707654c4c15094c07fe422fd970d9f58196a7"} Feb 24 00:24:47 crc kubenswrapper[5122]: I0224 00:24:47.884320 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_6ead887b-19da-46b8-af4b-e091bce80ed0/manage-dockerfile/0.log" Feb 24 00:24:48 crc kubenswrapper[5122]: I0224 00:24:48.852743 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6ead887b-19da-46b8-af4b-e091bce80ed0","Type":"ContainerStarted","Data":"b6462b59ee575c62ee31e91cade89fbe2ee9de1bc3372cb6fde79dfbc4590ac9"} Feb 24 00:24:48 crc kubenswrapper[5122]: I0224 00:24:48.909744 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" 
podStartSLOduration=4.909721779 podStartE2EDuration="4.909721779s" podCreationTimestamp="2026-02-24 00:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:24:48.905095016 +0000 UTC m=+955.994549559" watchObservedRunningTime="2026-02-24 00:24:48.909721779 +0000 UTC m=+955.999176302" Feb 24 00:24:57 crc kubenswrapper[5122]: I0224 00:24:57.116094 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:24:57 crc kubenswrapper[5122]: I0224 00:24:57.117047 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:24:58 crc kubenswrapper[5122]: I0224 00:24:58.631999 5122 scope.go:117] "RemoveContainer" containerID="980f8f96bec9f8dc7b954ca06e40b86141f817f3b24633aad1ecce6e7d420148" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.145501 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g94v7"] Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.362629 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g94v7"] Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.362879 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.500197 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gclvw\" (UniqueName: \"kubernetes.io/projected/0d95d7c4-51e6-4953-a84c-79d674a75178-kube-api-access-gclvw\") pod \"redhat-operators-g94v7\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.500273 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-utilities\") pod \"redhat-operators-g94v7\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.500414 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-catalog-content\") pod \"redhat-operators-g94v7\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.602203 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gclvw\" (UniqueName: \"kubernetes.io/projected/0d95d7c4-51e6-4953-a84c-79d674a75178-kube-api-access-gclvw\") pod \"redhat-operators-g94v7\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.602255 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-utilities\") pod \"redhat-operators-g94v7\" (UID: 
\"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.602340 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-catalog-content\") pod \"redhat-operators-g94v7\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.602802 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-utilities\") pod \"redhat-operators-g94v7\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.602863 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-catalog-content\") pod \"redhat-operators-g94v7\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.634915 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gclvw\" (UniqueName: \"kubernetes.io/projected/0d95d7c4-51e6-4953-a84c-79d674a75178-kube-api-access-gclvw\") pod \"redhat-operators-g94v7\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:22 crc kubenswrapper[5122]: I0224 00:25:22.683094 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:23 crc kubenswrapper[5122]: I0224 00:25:23.101599 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g94v7"] Feb 24 00:25:23 crc kubenswrapper[5122]: W0224 00:25:23.111942 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d95d7c4_51e6_4953_a84c_79d674a75178.slice/crio-37679c906872b3fdee11a025cac6e9c445128afc0299ab13e7372cbdbf7c3d6a WatchSource:0}: Error finding container 37679c906872b3fdee11a025cac6e9c445128afc0299ab13e7372cbdbf7c3d6a: Status 404 returned error can't find the container with id 37679c906872b3fdee11a025cac6e9c445128afc0299ab13e7372cbdbf7c3d6a Feb 24 00:25:24 crc kubenswrapper[5122]: I0224 00:25:24.111889 5122 generic.go:358] "Generic (PLEG): container finished" podID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerID="dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9" exitCode=0 Feb 24 00:25:24 crc kubenswrapper[5122]: I0224 00:25:24.112501 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g94v7" event={"ID":"0d95d7c4-51e6-4953-a84c-79d674a75178","Type":"ContainerDied","Data":"dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9"} Feb 24 00:25:24 crc kubenswrapper[5122]: I0224 00:25:24.112544 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g94v7" event={"ID":"0d95d7c4-51e6-4953-a84c-79d674a75178","Type":"ContainerStarted","Data":"37679c906872b3fdee11a025cac6e9c445128afc0299ab13e7372cbdbf7c3d6a"} Feb 24 00:25:25 crc kubenswrapper[5122]: I0224 00:25:25.123693 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g94v7" 
event={"ID":"0d95d7c4-51e6-4953-a84c-79d674a75178","Type":"ContainerStarted","Data":"c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f"} Feb 24 00:25:26 crc kubenswrapper[5122]: I0224 00:25:26.131017 5122 generic.go:358] "Generic (PLEG): container finished" podID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerID="c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f" exitCode=0 Feb 24 00:25:26 crc kubenswrapper[5122]: I0224 00:25:26.131117 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g94v7" event={"ID":"0d95d7c4-51e6-4953-a84c-79d674a75178","Type":"ContainerDied","Data":"c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f"} Feb 24 00:25:27 crc kubenswrapper[5122]: I0224 00:25:27.115277 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:25:27 crc kubenswrapper[5122]: I0224 00:25:27.115844 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:25:27 crc kubenswrapper[5122]: I0224 00:25:27.116019 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:25:27 crc kubenswrapper[5122]: I0224 00:25:27.116756 5122 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f5785bae16fc9d24757a682e5abe8ff71c9fc3ab688be3d82b7e331ef553c3b"} 
pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:25:27 crc kubenswrapper[5122]: I0224 00:25:27.116906 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" containerID="cri-o://2f5785bae16fc9d24757a682e5abe8ff71c9fc3ab688be3d82b7e331ef553c3b" gracePeriod=600 Feb 24 00:25:27 crc kubenswrapper[5122]: I0224 00:25:27.140557 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g94v7" event={"ID":"0d95d7c4-51e6-4953-a84c-79d674a75178","Type":"ContainerStarted","Data":"803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6"} Feb 24 00:25:27 crc kubenswrapper[5122]: I0224 00:25:27.161638 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g94v7" podStartSLOduration=4.452545113 podStartE2EDuration="5.16161626s" podCreationTimestamp="2026-02-24 00:25:22 +0000 UTC" firstStartedPulling="2026-02-24 00:25:24.113494655 +0000 UTC m=+991.202949178" lastFinishedPulling="2026-02-24 00:25:24.822565812 +0000 UTC m=+991.912020325" observedRunningTime="2026-02-24 00:25:27.158773284 +0000 UTC m=+994.248227817" watchObservedRunningTime="2026-02-24 00:25:27.16161626 +0000 UTC m=+994.251070783" Feb 24 00:25:28 crc kubenswrapper[5122]: I0224 00:25:28.151222 5122 generic.go:358] "Generic (PLEG): container finished" podID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerID="2f5785bae16fc9d24757a682e5abe8ff71c9fc3ab688be3d82b7e331ef553c3b" exitCode=0 Feb 24 00:25:28 crc kubenswrapper[5122]: I0224 00:25:28.151286 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" 
event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerDied","Data":"2f5785bae16fc9d24757a682e5abe8ff71c9fc3ab688be3d82b7e331ef553c3b"} Feb 24 00:25:28 crc kubenswrapper[5122]: I0224 00:25:28.151913 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"55499ceb4eb2c858cacc2bc04a0660f8aa8d33bb44c49a4583a9f94f85983434"} Feb 24 00:25:28 crc kubenswrapper[5122]: I0224 00:25:28.151934 5122 scope.go:117] "RemoveContainer" containerID="261340b5f7b11a4ce4a9ff704d0d02ee8484c6e0b40d48b9b50e904a701a287a" Feb 24 00:25:32 crc kubenswrapper[5122]: I0224 00:25:32.684256 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:32 crc kubenswrapper[5122]: I0224 00:25:32.684846 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:32 crc kubenswrapper[5122]: I0224 00:25:32.727835 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:33 crc kubenswrapper[5122]: I0224 00:25:33.228137 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:33 crc kubenswrapper[5122]: I0224 00:25:33.269008 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g94v7"] Feb 24 00:25:35 crc kubenswrapper[5122]: I0224 00:25:35.197782 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g94v7" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerName="registry-server" containerID="cri-o://803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6" gracePeriod=2 Feb 24 00:25:36 crc 
kubenswrapper[5122]: I0224 00:25:36.173381 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.206193 5122 generic.go:358] "Generic (PLEG): container finished" podID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerID="803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6" exitCode=0 Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.206470 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g94v7" event={"ID":"0d95d7c4-51e6-4953-a84c-79d674a75178","Type":"ContainerDied","Data":"803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6"} Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.206504 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g94v7" event={"ID":"0d95d7c4-51e6-4953-a84c-79d674a75178","Type":"ContainerDied","Data":"37679c906872b3fdee11a025cac6e9c445128afc0299ab13e7372cbdbf7c3d6a"} Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.206527 5122 scope.go:117] "RemoveContainer" containerID="803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.206687 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g94v7" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.226471 5122 scope.go:117] "RemoveContainer" containerID="c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.241486 5122 scope.go:117] "RemoveContainer" containerID="dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.259699 5122 scope.go:117] "RemoveContainer" containerID="803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6" Feb 24 00:25:36 crc kubenswrapper[5122]: E0224 00:25:36.260135 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6\": container with ID starting with 803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6 not found: ID does not exist" containerID="803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.260169 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6"} err="failed to get container status \"803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6\": rpc error: code = NotFound desc = could not find container \"803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6\": container with ID starting with 803637fd9062b77f982d09b4913891814c15c7675e36a99f5182efe680e63ab6 not found: ID does not exist" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.260191 5122 scope.go:117] "RemoveContainer" containerID="c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f" Feb 24 00:25:36 crc kubenswrapper[5122]: E0224 00:25:36.260599 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f\": container with ID starting with c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f not found: ID does not exist" containerID="c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.260626 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f"} err="failed to get container status \"c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f\": rpc error: code = NotFound desc = could not find container \"c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f\": container with ID starting with c9b9129756a0c1653c8580b4bd4d633125f4998fde63ad9eeee36ac78e16ed7f not found: ID does not exist" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.260643 5122 scope.go:117] "RemoveContainer" containerID="dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9" Feb 24 00:25:36 crc kubenswrapper[5122]: E0224 00:25:36.261255 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9\": container with ID starting with dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9 not found: ID does not exist" containerID="dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.261277 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9"} err="failed to get container status \"dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9\": rpc error: code = NotFound desc = could not find container 
\"dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9\": container with ID starting with dd87fec872cd66c6f4e8019e85ac9fcc09a2f50dc928dbcf2d443f8c543638c9 not found: ID does not exist" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.297967 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-utilities\") pod \"0d95d7c4-51e6-4953-a84c-79d674a75178\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.298152 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gclvw\" (UniqueName: \"kubernetes.io/projected/0d95d7c4-51e6-4953-a84c-79d674a75178-kube-api-access-gclvw\") pod \"0d95d7c4-51e6-4953-a84c-79d674a75178\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.298301 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-catalog-content\") pod \"0d95d7c4-51e6-4953-a84c-79d674a75178\" (UID: \"0d95d7c4-51e6-4953-a84c-79d674a75178\") " Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.300345 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-utilities" (OuterVolumeSpecName: "utilities") pod "0d95d7c4-51e6-4953-a84c-79d674a75178" (UID: "0d95d7c4-51e6-4953-a84c-79d674a75178"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.304659 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d95d7c4-51e6-4953-a84c-79d674a75178-kube-api-access-gclvw" (OuterVolumeSpecName: "kube-api-access-gclvw") pod "0d95d7c4-51e6-4953-a84c-79d674a75178" (UID: "0d95d7c4-51e6-4953-a84c-79d674a75178"). InnerVolumeSpecName "kube-api-access-gclvw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.400128 5122 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.400182 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gclvw\" (UniqueName: \"kubernetes.io/projected/0d95d7c4-51e6-4953-a84c-79d674a75178-kube-api-access-gclvw\") on node \"crc\" DevicePath \"\"" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.401572 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d95d7c4-51e6-4953-a84c-79d674a75178" (UID: "0d95d7c4-51e6-4953-a84c-79d674a75178"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.500959 5122 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d95d7c4-51e6-4953-a84c-79d674a75178-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.546510 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g94v7"] Feb 24 00:25:36 crc kubenswrapper[5122]: I0224 00:25:36.553958 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g94v7"] Feb 24 00:25:37 crc kubenswrapper[5122]: I0224 00:25:37.787910 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" path="/var/lib/kubelet/pods/0d95d7c4-51e6-4953-a84c-79d674a75178/volumes" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.128214 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531546-8lk88"] Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.129324 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerName="extract-content" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.129339 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerName="extract-content" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.129364 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerName="extract-utilities" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.129372 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerName="extract-utilities" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.129405 5122 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerName="registry-server" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.129415 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerName="registry-server" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.129539 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="0d95d7c4-51e6-4953-a84c-79d674a75178" containerName="registry-server" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.139712 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531546-8lk88"] Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.139823 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531546-8lk88" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.141950 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.142023 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.144241 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.238057 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvtf\" (UniqueName: \"kubernetes.io/projected/90820f7b-8662-452d-9559-6505af1ff0f3-kube-api-access-7cvtf\") pod \"auto-csr-approver-29531546-8lk88\" (UID: \"90820f7b-8662-452d-9559-6505af1ff0f3\") " pod="openshift-infra/auto-csr-approver-29531546-8lk88" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.339670 5122 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvtf\" (UniqueName: \"kubernetes.io/projected/90820f7b-8662-452d-9559-6505af1ff0f3-kube-api-access-7cvtf\") pod \"auto-csr-approver-29531546-8lk88\" (UID: \"90820f7b-8662-452d-9559-6505af1ff0f3\") " pod="openshift-infra/auto-csr-approver-29531546-8lk88" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.369020 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvtf\" (UniqueName: \"kubernetes.io/projected/90820f7b-8662-452d-9559-6505af1ff0f3-kube-api-access-7cvtf\") pod \"auto-csr-approver-29531546-8lk88\" (UID: \"90820f7b-8662-452d-9559-6505af1ff0f3\") " pod="openshift-infra/auto-csr-approver-29531546-8lk88" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.457006 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531546-8lk88" Feb 24 00:26:00 crc kubenswrapper[5122]: I0224 00:26:00.638914 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531546-8lk88"] Feb 24 00:26:00 crc kubenswrapper[5122]: W0224 00:26:00.643034 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90820f7b_8662_452d_9559_6505af1ff0f3.slice/crio-2fd59551549f948b99ef84d42a83fb8815b0ea26665c77fd27c3ac9748461df8 WatchSource:0}: Error finding container 2fd59551549f948b99ef84d42a83fb8815b0ea26665c77fd27c3ac9748461df8: Status 404 returned error can't find the container with id 2fd59551549f948b99ef84d42a83fb8815b0ea26665c77fd27c3ac9748461df8 Feb 24 00:26:01 crc kubenswrapper[5122]: I0224 00:26:01.366483 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531546-8lk88" event={"ID":"90820f7b-8662-452d-9559-6505af1ff0f3","Type":"ContainerStarted","Data":"2fd59551549f948b99ef84d42a83fb8815b0ea26665c77fd27c3ac9748461df8"} Feb 24 
00:26:02 crc kubenswrapper[5122]: I0224 00:26:02.374969 5122 generic.go:358] "Generic (PLEG): container finished" podID="90820f7b-8662-452d-9559-6505af1ff0f3" containerID="2737a005f21ffd2fb389d9e6b12e766a8e6e8aafe89c43b6b9b7cdae15b39a3a" exitCode=0 Feb 24 00:26:02 crc kubenswrapper[5122]: I0224 00:26:02.375021 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531546-8lk88" event={"ID":"90820f7b-8662-452d-9559-6505af1ff0f3","Type":"ContainerDied","Data":"2737a005f21ffd2fb389d9e6b12e766a8e6e8aafe89c43b6b9b7cdae15b39a3a"} Feb 24 00:26:03 crc kubenswrapper[5122]: I0224 00:26:03.600534 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531546-8lk88" Feb 24 00:26:03 crc kubenswrapper[5122]: I0224 00:26:03.686638 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cvtf\" (UniqueName: \"kubernetes.io/projected/90820f7b-8662-452d-9559-6505af1ff0f3-kube-api-access-7cvtf\") pod \"90820f7b-8662-452d-9559-6505af1ff0f3\" (UID: \"90820f7b-8662-452d-9559-6505af1ff0f3\") " Feb 24 00:26:03 crc kubenswrapper[5122]: I0224 00:26:03.692260 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90820f7b-8662-452d-9559-6505af1ff0f3-kube-api-access-7cvtf" (OuterVolumeSpecName: "kube-api-access-7cvtf") pod "90820f7b-8662-452d-9559-6505af1ff0f3" (UID: "90820f7b-8662-452d-9559-6505af1ff0f3"). InnerVolumeSpecName "kube-api-access-7cvtf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:26:03 crc kubenswrapper[5122]: I0224 00:26:03.789470 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7cvtf\" (UniqueName: \"kubernetes.io/projected/90820f7b-8662-452d-9559-6505af1ff0f3-kube-api-access-7cvtf\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:04 crc kubenswrapper[5122]: I0224 00:26:04.395956 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531546-8lk88" Feb 24 00:26:04 crc kubenswrapper[5122]: I0224 00:26:04.395959 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531546-8lk88" event={"ID":"90820f7b-8662-452d-9559-6505af1ff0f3","Type":"ContainerDied","Data":"2fd59551549f948b99ef84d42a83fb8815b0ea26665c77fd27c3ac9748461df8"} Feb 24 00:26:04 crc kubenswrapper[5122]: I0224 00:26:04.396523 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd59551549f948b99ef84d42a83fb8815b0ea26665c77fd27c3ac9748461df8" Feb 24 00:26:04 crc kubenswrapper[5122]: I0224 00:26:04.656293 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531540-ppfkv"] Feb 24 00:26:04 crc kubenswrapper[5122]: I0224 00:26:04.660199 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531540-ppfkv"] Feb 24 00:26:05 crc kubenswrapper[5122]: I0224 00:26:05.790054 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63be25af-7c2b-453a-904e-98f05c102e49" path="/var/lib/kubelet/pods/63be25af-7c2b-453a-904e-98f05c102e49/volumes" Feb 24 00:26:58 crc kubenswrapper[5122]: I0224 00:26:58.763430 5122 scope.go:117] "RemoveContainer" containerID="12b72396236406a2b4f1d88f75c3ab8e7fe0ed01d048573b6f2f4bad104558db" Feb 24 00:27:27 crc kubenswrapper[5122]: I0224 00:27:27.115636 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:27:27 crc kubenswrapper[5122]: I0224 00:27:27.116195 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:27:57 crc kubenswrapper[5122]: I0224 00:27:57.115877 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:27:57 crc kubenswrapper[5122]: I0224 00:27:57.116453 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.139748 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531548-45r4c"] Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.140790 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90820f7b-8662-452d-9559-6505af1ff0f3" containerName="oc" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.140809 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="90820f7b-8662-452d-9559-6505af1ff0f3" containerName="oc" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.140992 5122 
memory_manager.go:356] "RemoveStaleState removing state" podUID="90820f7b-8662-452d-9559-6505af1ff0f3" containerName="oc" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.144796 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531548-45r4c" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.146691 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.147320 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.147404 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.153363 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531548-45r4c"] Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.238245 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh4wv\" (UniqueName: \"kubernetes.io/projected/57e6643b-db72-41da-8438-593821a4ab0b-kube-api-access-lh4wv\") pod \"auto-csr-approver-29531548-45r4c\" (UID: \"57e6643b-db72-41da-8438-593821a4ab0b\") " pod="openshift-infra/auto-csr-approver-29531548-45r4c" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.340379 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lh4wv\" (UniqueName: \"kubernetes.io/projected/57e6643b-db72-41da-8438-593821a4ab0b-kube-api-access-lh4wv\") pod \"auto-csr-approver-29531548-45r4c\" (UID: \"57e6643b-db72-41da-8438-593821a4ab0b\") " pod="openshift-infra/auto-csr-approver-29531548-45r4c" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.361930 5122 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh4wv\" (UniqueName: \"kubernetes.io/projected/57e6643b-db72-41da-8438-593821a4ab0b-kube-api-access-lh4wv\") pod \"auto-csr-approver-29531548-45r4c\" (UID: \"57e6643b-db72-41da-8438-593821a4ab0b\") " pod="openshift-infra/auto-csr-approver-29531548-45r4c" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.458839 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531548-45r4c" Feb 24 00:28:00 crc kubenswrapper[5122]: I0224 00:28:00.665895 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531548-45r4c"] Feb 24 00:28:01 crc kubenswrapper[5122]: I0224 00:28:01.269927 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531548-45r4c" event={"ID":"57e6643b-db72-41da-8438-593821a4ab0b","Type":"ContainerStarted","Data":"027d7ba333b7b123f1eadc7f5173d92e739aab87bcef6c9474b5c296169d05b7"} Feb 24 00:28:02 crc kubenswrapper[5122]: I0224 00:28:02.277330 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531548-45r4c" event={"ID":"57e6643b-db72-41da-8438-593821a4ab0b","Type":"ContainerStarted","Data":"5c0d75325d102974dcca8748c78c4258b3569c693bdddf6fbcdb8760ec4b68a4"} Feb 24 00:28:02 crc kubenswrapper[5122]: I0224 00:28:02.294320 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29531548-45r4c" podStartSLOduration=1.154161724 podStartE2EDuration="2.294302047s" podCreationTimestamp="2026-02-24 00:28:00 +0000 UTC" firstStartedPulling="2026-02-24 00:28:00.677295612 +0000 UTC m=+1147.766750115" lastFinishedPulling="2026-02-24 00:28:01.817435925 +0000 UTC m=+1148.906890438" observedRunningTime="2026-02-24 00:28:02.292362235 +0000 UTC m=+1149.381816758" watchObservedRunningTime="2026-02-24 00:28:02.294302047 +0000 UTC m=+1149.383756570" Feb 
24 00:28:03 crc kubenswrapper[5122]: I0224 00:28:03.287963 5122 generic.go:358] "Generic (PLEG): container finished" podID="57e6643b-db72-41da-8438-593821a4ab0b" containerID="5c0d75325d102974dcca8748c78c4258b3569c693bdddf6fbcdb8760ec4b68a4" exitCode=0 Feb 24 00:28:03 crc kubenswrapper[5122]: I0224 00:28:03.288373 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531548-45r4c" event={"ID":"57e6643b-db72-41da-8438-593821a4ab0b","Type":"ContainerDied","Data":"5c0d75325d102974dcca8748c78c4258b3569c693bdddf6fbcdb8760ec4b68a4"} Feb 24 00:28:04 crc kubenswrapper[5122]: I0224 00:28:04.537251 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531548-45r4c" Feb 24 00:28:04 crc kubenswrapper[5122]: I0224 00:28:04.706368 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh4wv\" (UniqueName: \"kubernetes.io/projected/57e6643b-db72-41da-8438-593821a4ab0b-kube-api-access-lh4wv\") pod \"57e6643b-db72-41da-8438-593821a4ab0b\" (UID: \"57e6643b-db72-41da-8438-593821a4ab0b\") " Feb 24 00:28:04 crc kubenswrapper[5122]: I0224 00:28:04.731944 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e6643b-db72-41da-8438-593821a4ab0b-kube-api-access-lh4wv" (OuterVolumeSpecName: "kube-api-access-lh4wv") pod "57e6643b-db72-41da-8438-593821a4ab0b" (UID: "57e6643b-db72-41da-8438-593821a4ab0b"). InnerVolumeSpecName "kube-api-access-lh4wv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:28:04 crc kubenswrapper[5122]: I0224 00:28:04.808024 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lh4wv\" (UniqueName: \"kubernetes.io/projected/57e6643b-db72-41da-8438-593821a4ab0b-kube-api-access-lh4wv\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:05 crc kubenswrapper[5122]: I0224 00:28:05.305705 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531548-45r4c" Feb 24 00:28:05 crc kubenswrapper[5122]: I0224 00:28:05.306154 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531548-45r4c" event={"ID":"57e6643b-db72-41da-8438-593821a4ab0b","Type":"ContainerDied","Data":"027d7ba333b7b123f1eadc7f5173d92e739aab87bcef6c9474b5c296169d05b7"} Feb 24 00:28:05 crc kubenswrapper[5122]: I0224 00:28:05.306360 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027d7ba333b7b123f1eadc7f5173d92e739aab87bcef6c9474b5c296169d05b7" Feb 24 00:28:05 crc kubenswrapper[5122]: I0224 00:28:05.353494 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531542-n5v8b"] Feb 24 00:28:05 crc kubenswrapper[5122]: I0224 00:28:05.360190 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531542-n5v8b"] Feb 24 00:28:05 crc kubenswrapper[5122]: I0224 00:28:05.783309 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d5aff6-468e-4c2e-9115-e557c69f5947" path="/var/lib/kubelet/pods/f7d5aff6-468e-4c2e-9115-e557c69f5947/volumes" Feb 24 00:28:06 crc kubenswrapper[5122]: I0224 00:28:06.319447 5122 generic.go:358] "Generic (PLEG): container finished" podID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerID="b6462b59ee575c62ee31e91cade89fbe2ee9de1bc3372cb6fde79dfbc4590ac9" exitCode=0 Feb 24 00:28:06 crc kubenswrapper[5122]: I0224 00:28:06.319520 5122 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6ead887b-19da-46b8-af4b-e091bce80ed0","Type":"ContainerDied","Data":"b6462b59ee575c62ee31e91cade89fbe2ee9de1bc3372cb6fde79dfbc4590ac9"} Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.565015 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.646760 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-proxy-ca-bundles\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.646826 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tq67\" (UniqueName: \"kubernetes.io/projected/6ead887b-19da-46b8-af4b-e091bce80ed0-kube-api-access-2tq67\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647012 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-ca-bundles\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647058 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-node-pullsecrets\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647134 5122 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-pull\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647180 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-push\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647203 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647209 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-system-configs\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647276 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-buildworkdir\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647335 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-root\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647387 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-buildcachedir\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647595 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647609 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647620 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.647653 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.652336 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.652354 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.653149 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.653208 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ead887b-19da-46b8-af4b-e091bce80ed0-kube-api-access-2tq67" (OuterVolumeSpecName: "kube-api-access-2tq67") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "kube-api-access-2tq67". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.657163 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.748749 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-build-blob-cache\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749229 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-run\") pod \"6ead887b-19da-46b8-af4b-e091bce80ed0\" (UID: \"6ead887b-19da-46b8-af4b-e091bce80ed0\") " Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749581 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749599 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6ead887b-19da-46b8-af4b-e091bce80ed0-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749608 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749618 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2tq67\" (UniqueName: \"kubernetes.io/projected/6ead887b-19da-46b8-af4b-e091bce80ed0-kube-api-access-2tq67\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749628 5122 reconciler_common.go:299] "Volume detached for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749637 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749646 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/6ead887b-19da-46b8-af4b-e091bce80ed0-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.749655 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6ead887b-19da-46b8-af4b-e091bce80ed0-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.750406 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:28:07 crc kubenswrapper[5122]: I0224 00:28:07.850669 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:08 crc kubenswrapper[5122]: I0224 00:28:08.092205 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:28:08 crc kubenswrapper[5122]: I0224 00:28:08.154246 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:08 crc kubenswrapper[5122]: I0224 00:28:08.334903 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6ead887b-19da-46b8-af4b-e091bce80ed0","Type":"ContainerDied","Data":"e65c9ef09bc99d468175748a5f61b07f9b9ea9a277ecd81d08d87fd9b09d9f59"} Feb 24 00:28:08 crc kubenswrapper[5122]: I0224 00:28:08.334940 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e65c9ef09bc99d468175748a5f61b07f9b9ea9a277ecd81d08d87fd9b09d9f59" Feb 24 00:28:08 crc kubenswrapper[5122]: I0224 00:28:08.334979 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 24 00:28:10 crc kubenswrapper[5122]: I0224 00:28:10.313803 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6ead887b-19da-46b8-af4b-e091bce80ed0" (UID: "6ead887b-19da-46b8-af4b-e091bce80ed0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:28:10 crc kubenswrapper[5122]: I0224 00:28:10.383742 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6ead887b-19da-46b8-af4b-e091bce80ed0-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.139667 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140271 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerName="manage-dockerfile" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140286 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerName="manage-dockerfile" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140301 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57e6643b-db72-41da-8438-593821a4ab0b" containerName="oc" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140307 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e6643b-db72-41da-8438-593821a4ab0b" containerName="oc" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140320 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerName="git-clone" Feb 24 
00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140326 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerName="git-clone" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140339 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerName="docker-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140344 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerName="docker-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140451 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="57e6643b-db72-41da-8438-593821a4ab0b" containerName="oc" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.140461 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="6ead887b-19da-46b8-af4b-e091bce80ed0" containerName="docker-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.881511 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.881787 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.885032 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-28rxw\"" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.885638 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\"" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.885788 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\"" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.886158 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\"" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921057 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921360 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921470 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-pull\") pod \"sg-bridge-1-build\" (UID: 
\"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921575 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921659 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921734 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921806 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hdh5\" (UniqueName: \"kubernetes.io/projected/a5d9a473-a533-46a7-87c1-108828a16ee0-kube-api-access-6hdh5\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921893 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-buildcachedir\") pod \"sg-bridge-1-build\" (UID: 
\"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.921959 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.922030 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.922127 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:12 crc kubenswrapper[5122]: I0224 00:28:12.922243 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-push\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023383 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-buildcachedir\") pod \"sg-bridge-1-build\" (UID: 
\"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023429 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023452 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023535 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023576 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023598 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc 
kubenswrapper[5122]: I0224 00:28:13.023630 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-push\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023652 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023671 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023712 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-pull\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023746 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023773 5122 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023798 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023821 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hdh5\" (UniqueName: \"kubernetes.io/projected/a5d9a473-a533-46a7-87c1-108828a16ee0-kube-api-access-6hdh5\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.023923 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.024115 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.024619 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.025321 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.025458 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.025708 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.026592 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.031539 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: 
\"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-push\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.035791 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-pull\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.043717 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hdh5\" (UniqueName: \"kubernetes.io/projected/a5d9a473-a533-46a7-87c1-108828a16ee0-kube-api-access-6hdh5\") pod \"sg-bridge-1-build\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.203535 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:13 crc kubenswrapper[5122]: I0224 00:28:13.658104 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:28:14 crc kubenswrapper[5122]: I0224 00:28:14.389286 5122 generic.go:358] "Generic (PLEG): container finished" podID="a5d9a473-a533-46a7-87c1-108828a16ee0" containerID="1e6712a7ef34bc07dea8397acefc63d51bbc9c09bf826befb103c8bdd7a32546" exitCode=0 Feb 24 00:28:14 crc kubenswrapper[5122]: I0224 00:28:14.389374 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a5d9a473-a533-46a7-87c1-108828a16ee0","Type":"ContainerDied","Data":"1e6712a7ef34bc07dea8397acefc63d51bbc9c09bf826befb103c8bdd7a32546"} Feb 24 00:28:14 crc kubenswrapper[5122]: I0224 00:28:14.389844 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a5d9a473-a533-46a7-87c1-108828a16ee0","Type":"ContainerStarted","Data":"485bd1f2ebbae6972137a2ac38f897742312167a7b62db08af13188df6e9b39a"} Feb 24 00:28:15 crc kubenswrapper[5122]: I0224 00:28:15.406044 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a5d9a473-a533-46a7-87c1-108828a16ee0","Type":"ContainerStarted","Data":"39fa247edf0ab1113d0ceadfeb69518c9f3e65d56d0fac3d90faa0e9d1fa2e19"} Feb 24 00:28:15 crc kubenswrapper[5122]: I0224 00:28:15.431120 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.431100483 podStartE2EDuration="3.431100483s" podCreationTimestamp="2026-02-24 00:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:28:15.426626483 +0000 UTC m=+1162.516081066" watchObservedRunningTime="2026-02-24 00:28:15.431100483 +0000 UTC m=+1162.520554996" Feb 24 00:28:22 
crc kubenswrapper[5122]: I0224 00:28:22.596761 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:28:22 crc kubenswrapper[5122]: I0224 00:28:22.597616 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="a5d9a473-a533-46a7-87c1-108828a16ee0" containerName="docker-build" containerID="cri-o://39fa247edf0ab1113d0ceadfeb69518c9f3e65d56d0fac3d90faa0e9d1fa2e19" gracePeriod=30 Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.468366 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_a5d9a473-a533-46a7-87c1-108828a16ee0/docker-build/0.log" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.468885 5122 generic.go:358] "Generic (PLEG): container finished" podID="a5d9a473-a533-46a7-87c1-108828a16ee0" containerID="39fa247edf0ab1113d0ceadfeb69518c9f3e65d56d0fac3d90faa0e9d1fa2e19" exitCode=1 Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.468979 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a5d9a473-a533-46a7-87c1-108828a16ee0","Type":"ContainerDied","Data":"39fa247edf0ab1113d0ceadfeb69518c9f3e65d56d0fac3d90faa0e9d1fa2e19"} Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.520345 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_a5d9a473-a533-46a7-87c1-108828a16ee0/docker-build/0.log" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.521029 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.574800 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-node-pullsecrets\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.574876 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-push\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.574907 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hdh5\" (UniqueName: \"kubernetes.io/projected/a5d9a473-a533-46a7-87c1-108828a16ee0-kube-api-access-6hdh5\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.574939 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-root\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.574970 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-pull\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.574995 5122 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-system-configs\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.575122 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-build-blob-cache\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.575147 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-run\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.575206 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-buildcachedir\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.575266 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-ca-bundles\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.575286 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-proxy-ca-bundles\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.575321 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-buildworkdir\") pod \"a5d9a473-a533-46a7-87c1-108828a16ee0\" (UID: \"a5d9a473-a533-46a7-87c1-108828a16ee0\") " Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.576342 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.577590 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.577696 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.577694 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.578381 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.579099 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.580733 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.583719 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.585517 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d9a473-a533-46a7-87c1-108828a16ee0-kube-api-access-6hdh5" (OuterVolumeSpecName: "kube-api-access-6hdh5") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "kube-api-access-6hdh5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.587630 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.630650 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676841 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676873 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hdh5\" (UniqueName: \"kubernetes.io/projected/a5d9a473-a533-46a7-87c1-108828a16ee0-kube-api-access-6hdh5\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676882 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a5d9a473-a533-46a7-87c1-108828a16ee0-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676891 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676900 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676908 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676916 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-buildcachedir\") on node 
\"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676924 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676931 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d9a473-a533-46a7-87c1-108828a16ee0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676941 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:23 crc kubenswrapper[5122]: I0224 00:28:23.676950 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5d9a473-a533-46a7-87c1-108828a16ee0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.112344 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a5d9a473-a533-46a7-87c1-108828a16ee0" (UID: "a5d9a473-a533-46a7-87c1-108828a16ee0"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.187272 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5d9a473-a533-46a7-87c1-108828a16ee0-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.228863 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.229724 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5d9a473-a533-46a7-87c1-108828a16ee0" containerName="docker-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.229753 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d9a473-a533-46a7-87c1-108828a16ee0" containerName="docker-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.229786 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5d9a473-a533-46a7-87c1-108828a16ee0" containerName="manage-dockerfile" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.229797 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d9a473-a533-46a7-87c1-108828a16ee0" containerName="manage-dockerfile" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.229973 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="a5d9a473-a533-46a7-87c1-108828a16ee0" containerName="docker-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.315681 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.315832 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.318085 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.318387 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.319688 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.390275 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-pull\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.390634 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.390860 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-push\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.391019 5122 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.391269 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.391455 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.391721 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.391933 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.392105 5122 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.392308 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.392473 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.392673 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5755m\" (UniqueName: \"kubernetes.io/projected/16c78ea0-c944-4ee3-9876-fde990a955fd-kube-api-access-5755m\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.483258 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_a5d9a473-a533-46a7-87c1-108828a16ee0/docker-build/0.log" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.483983 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" 
event={"ID":"a5d9a473-a533-46a7-87c1-108828a16ee0","Type":"ContainerDied","Data":"485bd1f2ebbae6972137a2ac38f897742312167a7b62db08af13188df6e9b39a"} Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.484069 5122 scope.go:117] "RemoveContainer" containerID="39fa247edf0ab1113d0ceadfeb69518c9f3e65d56d0fac3d90faa0e9d1fa2e19" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.484086 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.494343 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-push\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.494774 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.494853 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.494897 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: 
\"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.494965 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.495084 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.495188 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.495268 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.495318 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 
00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.495358 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5755m\" (UniqueName: \"kubernetes.io/projected/16c78ea0-c944-4ee3-9876-fde990a955fd-kube-api-access-5755m\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.495418 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-pull\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.495491 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.495992 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.496655 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.497697 5122 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.497927 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.498237 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.499053 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.499966 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.500438 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.500691 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.505432 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-push\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.519517 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-pull\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.521894 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.535578 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5755m\" (UniqueName: \"kubernetes.io/projected/16c78ea0-c944-4ee3-9876-fde990a955fd-kube-api-access-5755m\") pod \"sg-bridge-2-build\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.539601 5122 kubelet.go:2547] "SyncLoop 
REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.554358 5122 scope.go:117] "RemoveContainer" containerID="1e6712a7ef34bc07dea8397acefc63d51bbc9c09bf826befb103c8bdd7a32546"
Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.632785 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Feb 24 00:28:24 crc kubenswrapper[5122]: I0224 00:28:24.831602 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Feb 24 00:28:25 crc kubenswrapper[5122]: I0224 00:28:25.495153 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"16c78ea0-c944-4ee3-9876-fde990a955fd","Type":"ContainerStarted","Data":"316dbf0985a01634f352afab13a6a33df3892186556ccc52777e6cbf61b60270"}
Feb 24 00:28:25 crc kubenswrapper[5122]: I0224 00:28:25.495367 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"16c78ea0-c944-4ee3-9876-fde990a955fd","Type":"ContainerStarted","Data":"ff9841a13c06bd7cb8dbe0ff1a4fde8795bf5bfadfc3d710ede0e534e04cc0e0"}
Feb 24 00:28:25 crc kubenswrapper[5122]: I0224 00:28:25.781634 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d9a473-a533-46a7-87c1-108828a16ee0" path="/var/lib/kubelet/pods/a5d9a473-a533-46a7-87c1-108828a16ee0/volumes"
Feb 24 00:28:26 crc kubenswrapper[5122]: I0224 00:28:26.505783 5122 generic.go:358] "Generic (PLEG): container finished" podID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerID="316dbf0985a01634f352afab13a6a33df3892186556ccc52777e6cbf61b60270" exitCode=0
Feb 24 00:28:26 crc kubenswrapper[5122]: I0224 00:28:26.505851 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"16c78ea0-c944-4ee3-9876-fde990a955fd","Type":"ContainerDied","Data":"316dbf0985a01634f352afab13a6a33df3892186556ccc52777e6cbf61b60270"}
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.115782 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.115846 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.115894 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp"
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.116539 5122 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55499ceb4eb2c858cacc2bc04a0660f8aa8d33bb44c49a4583a9f94f85983434"} pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.116597 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" containerID="cri-o://55499ceb4eb2c858cacc2bc04a0660f8aa8d33bb44c49a4583a9f94f85983434" gracePeriod=600
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.513221 5122
generic.go:358] "Generic (PLEG): container finished" podID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerID="55499ceb4eb2c858cacc2bc04a0660f8aa8d33bb44c49a4583a9f94f85983434" exitCode=0
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.513266 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerDied","Data":"55499ceb4eb2c858cacc2bc04a0660f8aa8d33bb44c49a4583a9f94f85983434"}
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.513741 5122 scope.go:117] "RemoveContainer" containerID="2f5785bae16fc9d24757a682e5abe8ff71c9fc3ab688be3d82b7e331ef553c3b"
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.516303 5122 generic.go:358] "Generic (PLEG): container finished" podID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerID="00114592c9147077b612d6138ffdd8c7d8545bc4d46a1f28cf0879a30adfebcf" exitCode=0
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.516434 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"16c78ea0-c944-4ee3-9876-fde990a955fd","Type":"ContainerDied","Data":"00114592c9147077b612d6138ffdd8c7d8545bc4d46a1f28cf0879a30adfebcf"}
Feb 24 00:28:27 crc kubenswrapper[5122]: I0224 00:28:27.574967 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_16c78ea0-c944-4ee3-9876-fde990a955fd/manage-dockerfile/0.log"
Feb 24 00:28:28 crc kubenswrapper[5122]: I0224 00:28:28.526470 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"11832d5408cd581df642868cc9e689ce6738c918addb34398621612d1d170a86"}
Feb 24 00:28:28 crc kubenswrapper[5122]: I0224 00:28:28.529214 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"16c78ea0-c944-4ee3-9876-fde990a955fd","Type":"ContainerStarted","Data":"ef38b992d5d459bf0c40ae5ad3ea7d904ef3fe002b7d3a44a334325323e857ee"}
Feb 24 00:28:54 crc kubenswrapper[5122]: I0224 00:28:54.312566 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jz28d_b5f97112-ba2a-46c0-a285-a845d2f96be9/kube-multus/0.log"
Feb 24 00:28:54 crc kubenswrapper[5122]: I0224 00:28:54.312580 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jz28d_b5f97112-ba2a-46c0-a285-a845d2f96be9/kube-multus/0.log"
Feb 24 00:28:54 crc kubenswrapper[5122]: I0224 00:28:54.325609 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 24 00:28:54 crc kubenswrapper[5122]: I0224 00:28:54.325850 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 24 00:28:58 crc kubenswrapper[5122]: I0224 00:28:58.902699 5122 scope.go:117] "RemoveContainer" containerID="17e83be981f50909a50c6164c94d957bb7e211e9405694005686092f793b58a9"
Feb 24 00:29:14 crc kubenswrapper[5122]: I0224 00:29:14.859007 5122 generic.go:358] "Generic (PLEG): container finished" podID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerID="ef38b992d5d459bf0c40ae5ad3ea7d904ef3fe002b7d3a44a334325323e857ee" exitCode=0
Feb 24 00:29:14 crc kubenswrapper[5122]: I0224 00:29:14.859109 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"16c78ea0-c944-4ee3-9876-fde990a955fd","Type":"ContainerDied","Data":"ef38b992d5d459bf0c40ae5ad3ea7d904ef3fe002b7d3a44a334325323e857ee"}
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.208247 5122 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.247661 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-node-pullsecrets\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.247741 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-pull\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.247800 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-push\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.247863 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-system-configs\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.247949 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-run\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.247984 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-root\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248013 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-ca-bundles\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248176 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-buildworkdir\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248201 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-buildcachedir\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248231 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-build-blob-cache\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248251 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5755m\" (UniqueName: \"kubernetes.io/projected/16c78ea0-c944-4ee3-9876-fde990a955fd-kube-api-access-5755m\") pod
\"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248336 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-proxy-ca-bundles\") pod \"16c78ea0-c944-4ee3-9876-fde990a955fd\" (UID: \"16c78ea0-c944-4ee3-9876-fde990a955fd\") "
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248536 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248562 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248739 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.248764 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16c78ea0-c944-4ee3-9876-fde990a955fd-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.249480 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.250207 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.250679 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "container-storage-run".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.250955 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.251967 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.255978 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.256054 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c78ea0-c944-4ee3-9876-fde990a955fd-kube-api-access-5755m" (OuterVolumeSpecName: "kube-api-access-5755m") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "kube-api-access-5755m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.256856 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.349825 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.349865 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.349873 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.349882 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.349892 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.349900 5122
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5755m\" (UniqueName: \"kubernetes.io/projected/16c78ea0-c944-4ee3-9876-fde990a955fd-kube-api-access-5755m\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.349908 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c78ea0-c944-4ee3-9876-fde990a955fd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.349920 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/16c78ea0-c944-4ee3-9876-fde990a955fd-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.365833 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.450810 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.875468 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"16c78ea0-c944-4ee3-9876-fde990a955fd","Type":"ContainerDied","Data":"ff9841a13c06bd7cb8dbe0ff1a4fde8795bf5bfadfc3d710ede0e534e04cc0e0"}
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.875814 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9841a13c06bd7cb8dbe0ff1a4fde8795bf5bfadfc3d710ede0e534e04cc0e0"
Feb 24 00:29:16 crc kubenswrapper[5122]: I0224 00:29:16.875628 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Feb 24 00:29:17 crc kubenswrapper[5122]: I0224 00:29:17.015677 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "16c78ea0-c944-4ee3-9876-fde990a955fd" (UID: "16c78ea0-c944-4ee3-9876-fde990a955fd"). InnerVolumeSpecName "container-storage-root".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:29:17 crc kubenswrapper[5122]: I0224 00:29:17.060818 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/16c78ea0-c944-4ee3-9876-fde990a955fd-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.203006 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.204612 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerName="manage-dockerfile"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.204662 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerName="manage-dockerfile"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.204707 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerName="docker-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.204730 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerName="docker-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.204770 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerName="git-clone"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.204788 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerName="git-clone"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.205045 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="16c78ea0-c944-4ee3-9876-fde990a955fd" containerName="docker-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.393744 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.393917 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.397331 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\""
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.397339 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-28rxw\""
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.397878 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\""
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.398208 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\""
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.511865 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2j4x\" (UniqueName: \"kubernetes.io/projected/a9b27119-090c-4b25-9d73-a613692c8d02-kube-api-access-d2j4x\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.511990 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20
crc kubenswrapper[5122]: I0224 00:29:20.512024 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512064 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512182 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512219 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512263 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512298 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512424 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512524 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512598 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.512661 5122 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.613702 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.613777 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.613864 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d2j4x\" (UniqueName: \"kubernetes.io/projected/a9b27119-090c-4b25-9d73-a613692c8d02-kube-api-access-d2j4x\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.614156 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.614214 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.614594 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.614732 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.614790 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.614980 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615033 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615436 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615513 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615794 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615543 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID:
\"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615872 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615895 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615954 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.615995 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.616283 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.616756 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.616811 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.623129 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.627822 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.634282 5122 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-d2j4x\" (UniqueName: \"kubernetes.io/projected/a9b27119-090c-4b25-9d73-a613692c8d02-kube-api-access-d2j4x\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:20 crc kubenswrapper[5122]: I0224 00:29:20.713490 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:21 crc kubenswrapper[5122]: I0224 00:29:21.183890 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:29:21 crc kubenswrapper[5122]: I0224 00:29:21.918046 5122 generic.go:358] "Generic (PLEG): container finished" podID="a9b27119-090c-4b25-9d73-a613692c8d02" containerID="93058384da68dadee219183598094748b228d14cabf97da857b29e072a0a7071" exitCode=0 Feb 24 00:29:21 crc kubenswrapper[5122]: I0224 00:29:21.918148 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a9b27119-090c-4b25-9d73-a613692c8d02","Type":"ContainerDied","Data":"93058384da68dadee219183598094748b228d14cabf97da857b29e072a0a7071"} Feb 24 00:29:21 crc kubenswrapper[5122]: I0224 00:29:21.918449 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a9b27119-090c-4b25-9d73-a613692c8d02","Type":"ContainerStarted","Data":"6819cbbb21b8ad7d9df388f51c86579ff933e1cb2964a1f129a0d92f07f2c845"} Feb 24 00:29:22 crc kubenswrapper[5122]: I0224 00:29:22.932123 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a9b27119-090c-4b25-9d73-a613692c8d02","Type":"ContainerStarted","Data":"4386ea4bde773dad9d74f870dee0d755ae883a9472224a1ad85c8ff9e2a3fc13"} Feb 24 00:29:22 crc kubenswrapper[5122]: I0224 00:29:22.967659 5122 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=2.967561324 podStartE2EDuration="2.967561324s" podCreationTimestamp="2026-02-24 00:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:29:22.959485049 +0000 UTC m=+1230.048939582" watchObservedRunningTime="2026-02-24 00:29:22.967561324 +0000 UTC m=+1230.057015867" Feb 24 00:29:30 crc kubenswrapper[5122]: I0224 00:29:30.855610 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:29:30 crc kubenswrapper[5122]: I0224 00:29:30.856568 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="a9b27119-090c-4b25-9d73-a613692c8d02" containerName="docker-build" containerID="cri-o://4386ea4bde773dad9d74f870dee0d755ae883a9472224a1ad85c8ff9e2a3fc13" gracePeriod=30 Feb 24 00:29:30 crc kubenswrapper[5122]: I0224 00:29:30.990740 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_a9b27119-090c-4b25-9d73-a613692c8d02/docker-build/0.log" Feb 24 00:29:30 crc kubenswrapper[5122]: I0224 00:29:30.991957 5122 generic.go:358] "Generic (PLEG): container finished" podID="a9b27119-090c-4b25-9d73-a613692c8d02" containerID="4386ea4bde773dad9d74f870dee0d755ae883a9472224a1ad85c8ff9e2a3fc13" exitCode=1 Feb 24 00:29:30 crc kubenswrapper[5122]: I0224 00:29:30.992051 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a9b27119-090c-4b25-9d73-a613692c8d02","Type":"ContainerDied","Data":"4386ea4bde773dad9d74f870dee0d755ae883a9472224a1ad85c8ff9e2a3fc13"} Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.245566 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_a9b27119-090c-4b25-9d73-a613692c8d02/docker-build/0.log" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.245976 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.374552 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-ca-bundles\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.374666 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-buildcachedir\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.374770 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.374913 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-buildworkdir\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.375554 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.375871 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.376007 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-build-blob-cache\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378367 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-root\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378439 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2j4x\" (UniqueName: \"kubernetes.io/projected/a9b27119-090c-4b25-9d73-a613692c8d02-kube-api-access-d2j4x\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378506 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-system-configs\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378547 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-node-pullsecrets\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378591 5122 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-proxy-ca-bundles\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378623 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-push\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378677 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-run\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378790 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-pull\") pod \"a9b27119-090c-4b25-9d73-a613692c8d02\" (UID: \"a9b27119-090c-4b25-9d73-a613692c8d02\") " Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.378802 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.379297 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.379799 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.379870 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.380305 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.380331 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.380347 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.380360 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.380372 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.380384 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b27119-090c-4b25-9d73-a613692c8d02-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.380397 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a9b27119-090c-4b25-9d73-a613692c8d02-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc 
kubenswrapper[5122]: I0224 00:29:31.387737 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.387958 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.389735 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b27119-090c-4b25-9d73-a613692c8d02-kube-api-access-d2j4x" (OuterVolumeSpecName: "kube-api-access-d2j4x") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "kube-api-access-d2j4x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.472167 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.481451 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.481494 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d2j4x\" (UniqueName: \"kubernetes.io/projected/a9b27119-090c-4b25-9d73-a613692c8d02-kube-api-access-d2j4x\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.481506 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.481515 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/a9b27119-090c-4b25-9d73-a613692c8d02-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.808296 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a9b27119-090c-4b25-9d73-a613692c8d02" (UID: "a9b27119-090c-4b25-9d73-a613692c8d02"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:29:31 crc kubenswrapper[5122]: I0224 00:29:31.886252 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a9b27119-090c-4b25-9d73-a613692c8d02-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.003260 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_a9b27119-090c-4b25-9d73-a613692c8d02/docker-build/0.log" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.003811 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a9b27119-090c-4b25-9d73-a613692c8d02","Type":"ContainerDied","Data":"6819cbbb21b8ad7d9df388f51c86579ff933e1cb2964a1f129a0d92f07f2c845"} Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.003892 5122 scope.go:117] "RemoveContainer" containerID="4386ea4bde773dad9d74f870dee0d755ae883a9472224a1ad85c8ff9e2a3fc13" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.003927 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.050163 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.054388 5122 scope.go:117] "RemoveContainer" containerID="93058384da68dadee219183598094748b228d14cabf97da857b29e072a0a7071" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.064426 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.385444 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.386482 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9b27119-090c-4b25-9d73-a613692c8d02" containerName="docker-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.386513 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b27119-090c-4b25-9d73-a613692c8d02" containerName="docker-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.386533 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9b27119-090c-4b25-9d73-a613692c8d02" containerName="manage-dockerfile" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.386541 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b27119-090c-4b25-9d73-a613692c8d02" containerName="manage-dockerfile" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.386723 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9b27119-090c-4b25-9d73-a613692c8d02" containerName="docker-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.391412 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.393395 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.393745 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.393950 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.394046 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-28rxw\"" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.403894 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.492862 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.492922 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493019 
5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdls8\" (UniqueName: \"kubernetes.io/projected/1db36ef1-2dcc-4035-acc6-db0219de30cf-kube-api-access-fdls8\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493058 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493145 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493251 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493303 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493391 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493437 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493459 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493509 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.493545 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.594931 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.595032 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.595922 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.596079 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.596246 5122 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.596351 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.596485 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.596525 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.596559 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.597126 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.597248 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.597739 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.597939 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.597977 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.598046 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.598118 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.598184 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdls8\" (UniqueName: \"kubernetes.io/projected/1db36ef1-2dcc-4035-acc6-db0219de30cf-kube-api-access-fdls8\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.598233 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.598336 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildcachedir\") 
pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.598549 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.598628 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.603261 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.604041 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.629009 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdls8\" (UniqueName: 
\"kubernetes.io/projected/1db36ef1-2dcc-4035-acc6-db0219de30cf-kube-api-access-fdls8\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:32 crc kubenswrapper[5122]: I0224 00:29:32.713201 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:33 crc kubenswrapper[5122]: I0224 00:29:33.008502 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 24 00:29:33 crc kubenswrapper[5122]: W0224 00:29:33.017565 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1db36ef1_2dcc_4035_acc6_db0219de30cf.slice/crio-cb63196ccfa2ec4a15f775bc3dc15eac57f5d6f7a465955640ef015d2a34d27a WatchSource:0}: Error finding container cb63196ccfa2ec4a15f775bc3dc15eac57f5d6f7a465955640ef015d2a34d27a: Status 404 returned error can't find the container with id cb63196ccfa2ec4a15f775bc3dc15eac57f5d6f7a465955640ef015d2a34d27a Feb 24 00:29:33 crc kubenswrapper[5122]: I0224 00:29:33.787876 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b27119-090c-4b25-9d73-a613692c8d02" path="/var/lib/kubelet/pods/a9b27119-090c-4b25-9d73-a613692c8d02/volumes" Feb 24 00:29:34 crc kubenswrapper[5122]: I0224 00:29:34.023625 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1db36ef1-2dcc-4035-acc6-db0219de30cf","Type":"ContainerStarted","Data":"82310429a935935448e90bbd1748722f98163c1b9a95ff30dd163badae816358"} Feb 24 00:29:34 crc kubenswrapper[5122]: I0224 00:29:34.023705 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"1db36ef1-2dcc-4035-acc6-db0219de30cf","Type":"ContainerStarted","Data":"cb63196ccfa2ec4a15f775bc3dc15eac57f5d6f7a465955640ef015d2a34d27a"} Feb 24 00:29:35 crc kubenswrapper[5122]: I0224 00:29:35.032379 5122 generic.go:358] "Generic (PLEG): container finished" podID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerID="82310429a935935448e90bbd1748722f98163c1b9a95ff30dd163badae816358" exitCode=0 Feb 24 00:29:35 crc kubenswrapper[5122]: I0224 00:29:35.032462 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1db36ef1-2dcc-4035-acc6-db0219de30cf","Type":"ContainerDied","Data":"82310429a935935448e90bbd1748722f98163c1b9a95ff30dd163badae816358"} Feb 24 00:29:35 crc kubenswrapper[5122]: I0224 00:29:35.034428 5122 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 00:29:36 crc kubenswrapper[5122]: I0224 00:29:36.045191 5122 generic.go:358] "Generic (PLEG): container finished" podID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerID="9068c43f80d5fca93bc65634aabafb2c2562de25958efdd3853354f974317d97" exitCode=0 Feb 24 00:29:36 crc kubenswrapper[5122]: I0224 00:29:36.045349 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1db36ef1-2dcc-4035-acc6-db0219de30cf","Type":"ContainerDied","Data":"9068c43f80d5fca93bc65634aabafb2c2562de25958efdd3853354f974317d97"} Feb 24 00:29:36 crc kubenswrapper[5122]: I0224 00:29:36.081557 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_1db36ef1-2dcc-4035-acc6-db0219de30cf/manage-dockerfile/0.log" Feb 24 00:29:37 crc kubenswrapper[5122]: I0224 00:29:37.056803 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"1db36ef1-2dcc-4035-acc6-db0219de30cf","Type":"ContainerStarted","Data":"927e27611c9f52211e3bfd28e326070be1a2077bae821313e126912366d86095"} Feb 24 00:29:37 crc kubenswrapper[5122]: I0224 00:29:37.085469 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.085446706 podStartE2EDuration="5.085446706s" podCreationTimestamp="2026-02-24 00:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:29:37.083784642 +0000 UTC m=+1244.173239175" watchObservedRunningTime="2026-02-24 00:29:37.085446706 +0000 UTC m=+1244.174901229" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.157925 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531550-htvhc"] Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.165496 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv"] Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.170172 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531550-htvhc"] Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.170434 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.165684 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531550-htvhc" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.173289 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.173450 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.173541 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.173805 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.173975 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.176153 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv"] Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.289715 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56p5\" (UniqueName: \"kubernetes.io/projected/887ebb46-1850-44ce-bf5f-029392697c25-kube-api-access-s56p5\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.290014 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/887ebb46-1850-44ce-bf5f-029392697c25-config-volume\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.290089 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/887ebb46-1850-44ce-bf5f-029392697c25-secret-volume\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.290149 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55xp\" (UniqueName: \"kubernetes.io/projected/810a6cb2-a1cd-4818-b267-e542e52ba120-kube-api-access-q55xp\") pod \"auto-csr-approver-29531550-htvhc\" (UID: \"810a6cb2-a1cd-4818-b267-e542e52ba120\") " pod="openshift-infra/auto-csr-approver-29531550-htvhc" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.391295 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/887ebb46-1850-44ce-bf5f-029392697c25-config-volume\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.391375 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/887ebb46-1850-44ce-bf5f-029392697c25-secret-volume\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: 
I0224 00:30:00.391435 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q55xp\" (UniqueName: \"kubernetes.io/projected/810a6cb2-a1cd-4818-b267-e542e52ba120-kube-api-access-q55xp\") pod \"auto-csr-approver-29531550-htvhc\" (UID: \"810a6cb2-a1cd-4818-b267-e542e52ba120\") " pod="openshift-infra/auto-csr-approver-29531550-htvhc" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.391474 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s56p5\" (UniqueName: \"kubernetes.io/projected/887ebb46-1850-44ce-bf5f-029392697c25-kube-api-access-s56p5\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.392349 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/887ebb46-1850-44ce-bf5f-029392697c25-config-volume\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.397235 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/887ebb46-1850-44ce-bf5f-029392697c25-secret-volume\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.410178 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s56p5\" (UniqueName: \"kubernetes.io/projected/887ebb46-1850-44ce-bf5f-029392697c25-kube-api-access-s56p5\") pod \"collect-profiles-29531550-4psqv\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.416231 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55xp\" (UniqueName: \"kubernetes.io/projected/810a6cb2-a1cd-4818-b267-e542e52ba120-kube-api-access-q55xp\") pod \"auto-csr-approver-29531550-htvhc\" (UID: \"810a6cb2-a1cd-4818-b267-e542e52ba120\") " pod="openshift-infra/auto-csr-approver-29531550-htvhc" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.488775 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.495284 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531550-htvhc" Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.732790 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531550-htvhc"] Feb 24 00:30:00 crc kubenswrapper[5122]: W0224 00:30:00.742473 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810a6cb2_a1cd_4818_b267_e542e52ba120.slice/crio-e99ef0ffdf3be9c88b8f6d42b3af76933e9ce80e1e8d6adcca46cf5311974d61 WatchSource:0}: Error finding container e99ef0ffdf3be9c88b8f6d42b3af76933e9ce80e1e8d6adcca46cf5311974d61: Status 404 returned error can't find the container with id e99ef0ffdf3be9c88b8f6d42b3af76933e9ce80e1e8d6adcca46cf5311974d61 Feb 24 00:30:00 crc kubenswrapper[5122]: I0224 00:30:00.919949 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv"] Feb 24 00:30:00 crc kubenswrapper[5122]: W0224 00:30:00.927476 5122 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod887ebb46_1850_44ce_bf5f_029392697c25.slice/crio-d7be83f2a9a37b5bb7f33bb4cb24ca1bf0a071a4da098212ac0fae922beb5e23 WatchSource:0}: Error finding container d7be83f2a9a37b5bb7f33bb4cb24ca1bf0a071a4da098212ac0fae922beb5e23: Status 404 returned error can't find the container with id d7be83f2a9a37b5bb7f33bb4cb24ca1bf0a071a4da098212ac0fae922beb5e23 Feb 24 00:30:01 crc kubenswrapper[5122]: I0224 00:30:01.228162 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" event={"ID":"887ebb46-1850-44ce-bf5f-029392697c25","Type":"ContainerStarted","Data":"d74a1ae8dfb42530589976855a39d75eba4094de6c805bcc3f01316fa60ee6df"} Feb 24 00:30:01 crc kubenswrapper[5122]: I0224 00:30:01.228224 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" event={"ID":"887ebb46-1850-44ce-bf5f-029392697c25","Type":"ContainerStarted","Data":"d7be83f2a9a37b5bb7f33bb4cb24ca1bf0a071a4da098212ac0fae922beb5e23"} Feb 24 00:30:01 crc kubenswrapper[5122]: I0224 00:30:01.234632 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531550-htvhc" event={"ID":"810a6cb2-a1cd-4818-b267-e542e52ba120","Type":"ContainerStarted","Data":"e99ef0ffdf3be9c88b8f6d42b3af76933e9ce80e1e8d6adcca46cf5311974d61"} Feb 24 00:30:01 crc kubenswrapper[5122]: I0224 00:30:01.257605 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" podStartSLOduration=1.2575870949999999 podStartE2EDuration="1.257587095s" podCreationTimestamp="2026-02-24 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:30:01.251564645 +0000 UTC m=+1268.341019168" watchObservedRunningTime="2026-02-24 
00:30:01.257587095 +0000 UTC m=+1268.347041608" Feb 24 00:30:02 crc kubenswrapper[5122]: I0224 00:30:02.243367 5122 generic.go:358] "Generic (PLEG): container finished" podID="887ebb46-1850-44ce-bf5f-029392697c25" containerID="d74a1ae8dfb42530589976855a39d75eba4094de6c805bcc3f01316fa60ee6df" exitCode=0 Feb 24 00:30:02 crc kubenswrapper[5122]: I0224 00:30:02.243452 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" event={"ID":"887ebb46-1850-44ce-bf5f-029392697c25","Type":"ContainerDied","Data":"d74a1ae8dfb42530589976855a39d75eba4094de6c805bcc3f01316fa60ee6df"} Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.250482 5122 generic.go:358] "Generic (PLEG): container finished" podID="810a6cb2-a1cd-4818-b267-e542e52ba120" containerID="b516ebc29295e4bd4af59901c52c95ecc2ba8e7ee0f6200c2782dc670797a8fe" exitCode=0 Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.250533 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531550-htvhc" event={"ID":"810a6cb2-a1cd-4818-b267-e542e52ba120","Type":"ContainerDied","Data":"b516ebc29295e4bd4af59901c52c95ecc2ba8e7ee0f6200c2782dc670797a8fe"} Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.471085 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.533128 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/887ebb46-1850-44ce-bf5f-029392697c25-config-volume\") pod \"887ebb46-1850-44ce-bf5f-029392697c25\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.533284 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/887ebb46-1850-44ce-bf5f-029392697c25-secret-volume\") pod \"887ebb46-1850-44ce-bf5f-029392697c25\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.533386 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s56p5\" (UniqueName: \"kubernetes.io/projected/887ebb46-1850-44ce-bf5f-029392697c25-kube-api-access-s56p5\") pod \"887ebb46-1850-44ce-bf5f-029392697c25\" (UID: \"887ebb46-1850-44ce-bf5f-029392697c25\") " Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.533887 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/887ebb46-1850-44ce-bf5f-029392697c25-config-volume" (OuterVolumeSpecName: "config-volume") pod "887ebb46-1850-44ce-bf5f-029392697c25" (UID: "887ebb46-1850-44ce-bf5f-029392697c25"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.551164 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887ebb46-1850-44ce-bf5f-029392697c25-kube-api-access-s56p5" (OuterVolumeSpecName: "kube-api-access-s56p5") pod "887ebb46-1850-44ce-bf5f-029392697c25" (UID: "887ebb46-1850-44ce-bf5f-029392697c25"). 
InnerVolumeSpecName "kube-api-access-s56p5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.556276 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887ebb46-1850-44ce-bf5f-029392697c25-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "887ebb46-1850-44ce-bf5f-029392697c25" (UID: "887ebb46-1850-44ce-bf5f-029392697c25"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.634917 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s56p5\" (UniqueName: \"kubernetes.io/projected/887ebb46-1850-44ce-bf5f-029392697c25-kube-api-access-s56p5\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.634958 5122 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/887ebb46-1850-44ce-bf5f-029392697c25-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:03 crc kubenswrapper[5122]: I0224 00:30:03.634967 5122 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/887ebb46-1850-44ce-bf5f-029392697c25-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:04 crc kubenswrapper[5122]: I0224 00:30:04.259939 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" Feb 24 00:30:04 crc kubenswrapper[5122]: I0224 00:30:04.259932 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-4psqv" event={"ID":"887ebb46-1850-44ce-bf5f-029392697c25","Type":"ContainerDied","Data":"d7be83f2a9a37b5bb7f33bb4cb24ca1bf0a071a4da098212ac0fae922beb5e23"} Feb 24 00:30:04 crc kubenswrapper[5122]: I0224 00:30:04.260119 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7be83f2a9a37b5bb7f33bb4cb24ca1bf0a071a4da098212ac0fae922beb5e23" Feb 24 00:30:04 crc kubenswrapper[5122]: I0224 00:30:04.535763 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531550-htvhc" Feb 24 00:30:04 crc kubenswrapper[5122]: I0224 00:30:04.650944 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q55xp\" (UniqueName: \"kubernetes.io/projected/810a6cb2-a1cd-4818-b267-e542e52ba120-kube-api-access-q55xp\") pod \"810a6cb2-a1cd-4818-b267-e542e52ba120\" (UID: \"810a6cb2-a1cd-4818-b267-e542e52ba120\") " Feb 24 00:30:04 crc kubenswrapper[5122]: I0224 00:30:04.656733 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810a6cb2-a1cd-4818-b267-e542e52ba120-kube-api-access-q55xp" (OuterVolumeSpecName: "kube-api-access-q55xp") pod "810a6cb2-a1cd-4818-b267-e542e52ba120" (UID: "810a6cb2-a1cd-4818-b267-e542e52ba120"). InnerVolumeSpecName "kube-api-access-q55xp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:30:04 crc kubenswrapper[5122]: I0224 00:30:04.752622 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q55xp\" (UniqueName: \"kubernetes.io/projected/810a6cb2-a1cd-4818-b267-e542e52ba120-kube-api-access-q55xp\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:05 crc kubenswrapper[5122]: I0224 00:30:05.270441 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531550-htvhc" Feb 24 00:30:05 crc kubenswrapper[5122]: I0224 00:30:05.270480 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531550-htvhc" event={"ID":"810a6cb2-a1cd-4818-b267-e542e52ba120","Type":"ContainerDied","Data":"e99ef0ffdf3be9c88b8f6d42b3af76933e9ce80e1e8d6adcca46cf5311974d61"} Feb 24 00:30:05 crc kubenswrapper[5122]: I0224 00:30:05.270520 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e99ef0ffdf3be9c88b8f6d42b3af76933e9ce80e1e8d6adcca46cf5311974d61" Feb 24 00:30:05 crc kubenswrapper[5122]: I0224 00:30:05.620940 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531544-dqz2m"] Feb 24 00:30:05 crc kubenswrapper[5122]: I0224 00:30:05.630184 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531544-dqz2m"] Feb 24 00:30:05 crc kubenswrapper[5122]: I0224 00:30:05.786702 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd48b1e9-6fd3-4efc-9af6-99b06c47715b" path="/var/lib/kubelet/pods/dd48b1e9-6fd3-4efc-9af6-99b06c47715b/volumes" Feb 24 00:30:27 crc kubenswrapper[5122]: I0224 00:30:27.116380 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 24 00:30:27 crc kubenswrapper[5122]: I0224 00:30:27.116954 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:30:27 crc kubenswrapper[5122]: I0224 00:30:27.481853 5122 generic.go:358] "Generic (PLEG): container finished" podID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerID="927e27611c9f52211e3bfd28e326070be1a2077bae821313e126912366d86095" exitCode=0 Feb 24 00:30:27 crc kubenswrapper[5122]: I0224 00:30:27.481963 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1db36ef1-2dcc-4035-acc6-db0219de30cf","Type":"ContainerDied","Data":"927e27611c9f52211e3bfd28e326070be1a2077bae821313e126912366d86095"} Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.736547 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807108 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-run\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807153 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-pull\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807200 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-blob-cache\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807273 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdls8\" (UniqueName: \"kubernetes.io/projected/1db36ef1-2dcc-4035-acc6-db0219de30cf-kube-api-access-fdls8\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807292 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-system-configs\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807316 5122 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-root\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807374 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-ca-bundles\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807426 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildcachedir\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807477 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.807923 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-proxy-ca-bundles\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808185 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildworkdir\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808285 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-28rxw-push\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-push\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808326 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-node-pullsecrets\") pod \"1db36ef1-2dcc-4035-acc6-db0219de30cf\" (UID: \"1db36ef1-2dcc-4035-acc6-db0219de30cf\") " Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808502 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808562 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808647 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808833 5122 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808849 5122 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808859 5122 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808870 5122 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/1db36ef1-2dcc-4035-acc6-db0219de30cf-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.808924 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.810764 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.812379 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.814263 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db36ef1-2dcc-4035-acc6-db0219de30cf-kube-api-access-fdls8" (OuterVolumeSpecName: "kube-api-access-fdls8") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "kube-api-access-fdls8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.814721 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-pull" (OuterVolumeSpecName: "builder-dockercfg-28rxw-pull") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "builder-dockercfg-28rxw-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.814904 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-push" (OuterVolumeSpecName: "builder-dockercfg-28rxw-push") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "builder-dockercfg-28rxw-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.910414 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdls8\" (UniqueName: \"kubernetes.io/projected/1db36ef1-2dcc-4035-acc6-db0219de30cf-kube-api-access-fdls8\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.910454 5122 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.910468 5122 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.910482 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-push\" 
(UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.910496 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.910508 5122 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-28rxw-pull\" (UniqueName: \"kubernetes.io/secret/1db36ef1-2dcc-4035-acc6-db0219de30cf-builder-dockercfg-28rxw-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:28 crc kubenswrapper[5122]: I0224 00:30:28.931933 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:30:29 crc kubenswrapper[5122]: I0224 00:30:29.011419 5122 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:29 crc kubenswrapper[5122]: I0224 00:30:29.501532 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:30:29 crc kubenswrapper[5122]: I0224 00:30:29.501528 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1db36ef1-2dcc-4035-acc6-db0219de30cf","Type":"ContainerDied","Data":"cb63196ccfa2ec4a15f775bc3dc15eac57f5d6f7a465955640ef015d2a34d27a"} Feb 24 00:30:29 crc kubenswrapper[5122]: I0224 00:30:29.501747 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb63196ccfa2ec4a15f775bc3dc15eac57f5d6f7a465955640ef015d2a34d27a" Feb 24 00:30:30 crc kubenswrapper[5122]: I0224 00:30:30.084734 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1db36ef1-2dcc-4035-acc6-db0219de30cf" (UID: "1db36ef1-2dcc-4035-acc6-db0219de30cf"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 24 00:30:30 crc kubenswrapper[5122]: I0224 00:30:30.130209 5122 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1db36ef1-2dcc-4035-acc6-db0219de30cf-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.288142 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp"] Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289390 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerName="git-clone" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289408 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerName="git-clone" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289437 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerName="docker-build" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289446 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerName="docker-build" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289460 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerName="manage-dockerfile" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289469 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerName="manage-dockerfile" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289506 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="810a6cb2-a1cd-4818-b267-e542e52ba120" containerName="oc" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 
00:30:34.289515 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="810a6cb2-a1cd-4818-b267-e542e52ba120" containerName="oc" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289536 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="887ebb46-1850-44ce-bf5f-029392697c25" containerName="collect-profiles" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289544 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="887ebb46-1850-44ce-bf5f-029392697c25" containerName="collect-profiles" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289668 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="810a6cb2-a1cd-4818-b267-e542e52ba120" containerName="oc" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289682 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="887ebb46-1850-44ce-bf5f-029392697c25" containerName="collect-profiles" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.289694 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="1db36ef1-2dcc-4035-acc6-db0219de30cf" containerName="docker-build" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.293249 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.295445 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-dkrkh\"" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.299891 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp"] Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.395205 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c74d\" (UniqueName: \"kubernetes.io/projected/b8c9fe6c-982a-4162-aa02-7f783626920f-kube-api-access-4c74d\") pod \"smart-gateway-operator-544bbc9ddd-bx4hp\" (UID: \"b8c9fe6c-982a-4162-aa02-7f783626920f\") " pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.395814 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b8c9fe6c-982a-4162-aa02-7f783626920f-runner\") pod \"smart-gateway-operator-544bbc9ddd-bx4hp\" (UID: \"b8c9fe6c-982a-4162-aa02-7f783626920f\") " pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.497637 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4c74d\" (UniqueName: \"kubernetes.io/projected/b8c9fe6c-982a-4162-aa02-7f783626920f-kube-api-access-4c74d\") pod \"smart-gateway-operator-544bbc9ddd-bx4hp\" (UID: \"b8c9fe6c-982a-4162-aa02-7f783626920f\") " pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.497822 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/b8c9fe6c-982a-4162-aa02-7f783626920f-runner\") pod \"smart-gateway-operator-544bbc9ddd-bx4hp\" (UID: \"b8c9fe6c-982a-4162-aa02-7f783626920f\") " pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.498743 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b8c9fe6c-982a-4162-aa02-7f783626920f-runner\") pod \"smart-gateway-operator-544bbc9ddd-bx4hp\" (UID: \"b8c9fe6c-982a-4162-aa02-7f783626920f\") " pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.523585 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c74d\" (UniqueName: \"kubernetes.io/projected/b8c9fe6c-982a-4162-aa02-7f783626920f-kube-api-access-4c74d\") pod \"smart-gateway-operator-544bbc9ddd-bx4hp\" (UID: \"b8c9fe6c-982a-4162-aa02-7f783626920f\") " pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" Feb 24 00:30:34 crc kubenswrapper[5122]: I0224 00:30:34.652223 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" Feb 24 00:30:35 crc kubenswrapper[5122]: I0224 00:30:35.074858 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp"] Feb 24 00:30:35 crc kubenswrapper[5122]: I0224 00:30:35.560361 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" event={"ID":"b8c9fe6c-982a-4162-aa02-7f783626920f","Type":"ContainerStarted","Data":"83f00b5e47643f692ae266b8b212662b793554844c1346a23a640b574d278e0b"} Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.556143 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-8b8bc878d-z4qks"] Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.573180 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-8b8bc878d-z4qks"] Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.573302 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.575358 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-wmvzc\"" Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.749924 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e-runner\") pod \"service-telemetry-operator-8b8bc878d-z4qks\" (UID: \"78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e\") " pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.750004 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp2np\" (UniqueName: \"kubernetes.io/projected/78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e-kube-api-access-vp2np\") pod \"service-telemetry-operator-8b8bc878d-z4qks\" (UID: \"78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e\") " pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.850759 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e-runner\") pod \"service-telemetry-operator-8b8bc878d-z4qks\" (UID: \"78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e\") " pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.850822 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vp2np\" (UniqueName: \"kubernetes.io/projected/78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e-kube-api-access-vp2np\") pod \"service-telemetry-operator-8b8bc878d-z4qks\" (UID: \"78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e\") " 
pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.851454 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e-runner\") pod \"service-telemetry-operator-8b8bc878d-z4qks\" (UID: \"78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e\") " pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.883568 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp2np\" (UniqueName: \"kubernetes.io/projected/78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e-kube-api-access-vp2np\") pod \"service-telemetry-operator-8b8bc878d-z4qks\" (UID: \"78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e\") " pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" Feb 24 00:30:37 crc kubenswrapper[5122]: I0224 00:30:37.892664 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" Feb 24 00:30:46 crc kubenswrapper[5122]: I0224 00:30:46.706041 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-8b8bc878d-z4qks"] Feb 24 00:30:50 crc kubenswrapper[5122]: W0224 00:30:50.374358 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78992448_2ff2_4e0d_a5e9_4c8a2dd6ab0e.slice/crio-cd7242979372adeaa22a21d874018de28c2d41e45508f1bae390fff5f730cacd WatchSource:0}: Error finding container cd7242979372adeaa22a21d874018de28c2d41e45508f1bae390fff5f730cacd: Status 404 returned error can't find the container with id cd7242979372adeaa22a21d874018de28c2d41e45508f1bae390fff5f730cacd Feb 24 00:30:50 crc kubenswrapper[5122]: I0224 00:30:50.705476 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" event={"ID":"78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e","Type":"ContainerStarted","Data":"cd7242979372adeaa22a21d874018de28c2d41e45508f1bae390fff5f730cacd"} Feb 24 00:30:51 crc kubenswrapper[5122]: I0224 00:30:51.720265 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" event={"ID":"b8c9fe6c-982a-4162-aa02-7f783626920f","Type":"ContainerStarted","Data":"b9997b49511a53cecbf9995e0978e84107b2f6389b45d81282b9b3ca16747ee2"} Feb 24 00:30:51 crc kubenswrapper[5122]: I0224 00:30:51.740758 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-544bbc9ddd-bx4hp" podStartSLOduration=1.978599953 podStartE2EDuration="17.74074113s" podCreationTimestamp="2026-02-24 00:30:34 +0000 UTC" firstStartedPulling="2026-02-24 00:30:35.09077676 +0000 UTC m=+1302.180231273" lastFinishedPulling="2026-02-24 00:30:50.852917937 +0000 UTC m=+1317.942372450" 
observedRunningTime="2026-02-24 00:30:51.737866514 +0000 UTC m=+1318.827321027" watchObservedRunningTime="2026-02-24 00:30:51.74074113 +0000 UTC m=+1318.830195653" Feb 24 00:30:56 crc kubenswrapper[5122]: I0224 00:30:56.753328 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" event={"ID":"78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e","Type":"ContainerStarted","Data":"58cf66b9fc9941b46e584bdb5049710b30c57e2a3e791dd60ef044db68c985fc"} Feb 24 00:30:56 crc kubenswrapper[5122]: I0224 00:30:56.778183 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-8b8bc878d-z4qks" podStartSLOduration=13.718095731 podStartE2EDuration="19.778167061s" podCreationTimestamp="2026-02-24 00:30:37 +0000 UTC" firstStartedPulling="2026-02-24 00:30:50.382017737 +0000 UTC m=+1317.471472250" lastFinishedPulling="2026-02-24 00:30:56.442089047 +0000 UTC m=+1323.531543580" observedRunningTime="2026-02-24 00:30:56.776884057 +0000 UTC m=+1323.866338580" watchObservedRunningTime="2026-02-24 00:30:56.778167061 +0000 UTC m=+1323.867621604" Feb 24 00:30:57 crc kubenswrapper[5122]: I0224 00:30:57.115263 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:30:57 crc kubenswrapper[5122]: I0224 00:30:57.115333 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:30:59 crc kubenswrapper[5122]: I0224 00:30:59.080475 5122 scope.go:117] "RemoveContainer" 
containerID="793f2270d6ccd6dd8f42f9e09f125ada03f4189dc85052e06ab4820daf36011c" Feb 24 00:31:17 crc kubenswrapper[5122]: I0224 00:31:17.847506 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-6pksx"] Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.768144 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.774006 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.774158 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.774253 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-6qwl7\"" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.774432 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.774889 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.775041 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.775186 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.786219 5122 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-6pksx"] Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.815115 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.815197 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-users\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.815323 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.815642 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.815745 5122 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6ftj\" (UniqueName: \"kubernetes.io/projected/d6050c59-cb83-4be0-9710-f1739d8f457f-kube-api-access-w6ftj\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.815831 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-config\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.815879 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.916593 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w6ftj\" (UniqueName: \"kubernetes.io/projected/d6050c59-cb83-4be0-9710-f1739d8f457f-kube-api-access-w6ftj\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.916649 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-config\") pod 
\"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.916668 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.916715 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.916947 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-users\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.917120 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 
00:31:20.917560 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.917787 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-config\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.925236 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-users\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.927613 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.927999 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-6pksx\" 
(UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.929546 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.940627 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:20 crc kubenswrapper[5122]: I0224 00:31:20.941379 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6ftj\" (UniqueName: \"kubernetes.io/projected/d6050c59-cb83-4be0-9710-f1739d8f457f-kube-api-access-w6ftj\") pod \"default-interconnect-55bf8d5cb-6pksx\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:21 crc kubenswrapper[5122]: I0224 00:31:21.085427 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:31:21 crc kubenswrapper[5122]: I0224 00:31:21.301557 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-6pksx"] Feb 24 00:31:21 crc kubenswrapper[5122]: W0224 00:31:21.310114 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6050c59_cb83_4be0_9710_f1739d8f457f.slice/crio-fb9b37514dc71503edcd2270cca7f6e9ebbe9ec66207341e725ac03b828b37f5 WatchSource:0}: Error finding container fb9b37514dc71503edcd2270cca7f6e9ebbe9ec66207341e725ac03b828b37f5: Status 404 returned error can't find the container with id fb9b37514dc71503edcd2270cca7f6e9ebbe9ec66207341e725ac03b828b37f5 Feb 24 00:31:21 crc kubenswrapper[5122]: I0224 00:31:21.966397 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" event={"ID":"d6050c59-cb83-4be0-9710-f1739d8f457f","Type":"ContainerStarted","Data":"fb9b37514dc71503edcd2270cca7f6e9ebbe9ec66207341e725ac03b828b37f5"} Feb 24 00:31:27 crc kubenswrapper[5122]: I0224 00:31:27.008728 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" event={"ID":"d6050c59-cb83-4be0-9710-f1739d8f457f","Type":"ContainerStarted","Data":"b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3"} Feb 24 00:31:27 crc kubenswrapper[5122]: I0224 00:31:27.037751 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" podStartSLOduration=5.341215165 podStartE2EDuration="10.037729865s" podCreationTimestamp="2026-02-24 00:31:17 +0000 UTC" firstStartedPulling="2026-02-24 00:31:21.313144712 +0000 UTC m=+1348.402599225" lastFinishedPulling="2026-02-24 00:31:26.009659412 +0000 UTC m=+1353.099113925" observedRunningTime="2026-02-24 00:31:27.02838964 
+0000 UTC m=+1354.117844183" watchObservedRunningTime="2026-02-24 00:31:27.037729865 +0000 UTC m=+1354.127184388" Feb 24 00:31:27 crc kubenswrapper[5122]: I0224 00:31:27.115741 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:31:27 crc kubenswrapper[5122]: I0224 00:31:27.115850 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:31:27 crc kubenswrapper[5122]: I0224 00:31:27.115898 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:31:27 crc kubenswrapper[5122]: I0224 00:31:27.116803 5122 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11832d5408cd581df642868cc9e689ce6738c918addb34398621612d1d170a86"} pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:31:27 crc kubenswrapper[5122]: I0224 00:31:27.116883 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" containerID="cri-o://11832d5408cd581df642868cc9e689ce6738c918addb34398621612d1d170a86" gracePeriod=600 Feb 24 00:31:28 crc kubenswrapper[5122]: I0224 00:31:28.020268 5122 generic.go:358] "Generic (PLEG): 
container finished" podID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerID="11832d5408cd581df642868cc9e689ce6738c918addb34398621612d1d170a86" exitCode=0 Feb 24 00:31:28 crc kubenswrapper[5122]: I0224 00:31:28.020377 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerDied","Data":"11832d5408cd581df642868cc9e689ce6738c918addb34398621612d1d170a86"} Feb 24 00:31:28 crc kubenswrapper[5122]: I0224 00:31:28.020454 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerStarted","Data":"13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"} Feb 24 00:31:28 crc kubenswrapper[5122]: I0224 00:31:28.020485 5122 scope.go:117] "RemoveContainer" containerID="55499ceb4eb2c858cacc2bc04a0660f8aa8d33bb44c49a4583a9f94f85983434" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.519935 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.897427 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.902170 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.902194 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.902180 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.902265 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.902550 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.902790 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.902917 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.903695 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-llc2t\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.904142 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.910108 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.915588 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956103 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956167 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxp2j\" (UniqueName: \"kubernetes.io/projected/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-kube-api-access-lxp2j\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956332 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956490 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: 
I0224 00:31:29.956614 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-config\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956640 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956791 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-268b491c-e5b6-41b4-9974-3bf577ddc550\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-268b491c-e5b6-41b4-9974-3bf577ddc550\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956853 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956912 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-tls-assets\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " 
pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956939 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-config-out\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.956993 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:29 crc kubenswrapper[5122]: I0224 00:31:29.957185 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-web-config\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059148 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059231 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lxp2j\" (UniqueName: \"kubernetes.io/projected/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-kube-api-access-lxp2j\") pod \"prometheus-default-0\" (UID: 
\"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059323 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059393 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059471 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-config\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059513 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059626 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-268b491c-e5b6-41b4-9974-3bf577ddc550\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-268b491c-e5b6-41b4-9974-3bf577ddc550\") pod 
\"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059688 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059735 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-tls-assets\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059783 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-config-out\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059829 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.059931 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-web-config\") pod \"prometheus-default-0\" (UID: 
\"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.060353 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.060595 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.061241 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: E0224 00:31:30.061650 5122 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 24 00:31:30 crc kubenswrapper[5122]: E0224 00:31:30.061757 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls podName:cea3bfb1-bbc1-4d71-b7df-7b8070e46908 nodeName:}" failed. No retries permitted until 2026-02-24 00:31:30.561736642 +0000 UTC m=+1357.651191165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "cea3bfb1-bbc1-4d71-b7df-7b8070e46908") : secret "default-prometheus-proxy-tls" not found Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.062746 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.065768 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-tls-assets\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.067311 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-web-config\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.067362 5122 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.067399 5122 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-268b491c-e5b6-41b4-9974-3bf577ddc550\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-268b491c-e5b6-41b4-9974-3bf577ddc550\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6728a5686754ff6cc1c406732b5e1a0e31efcbc9bd24bdc4f6e6b741b92fb60b/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.068030 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-config-out\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.069487 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.076935 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-config\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.077363 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxp2j\" (UniqueName: \"kubernetes.io/projected/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-kube-api-access-lxp2j\") pod 
\"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.096999 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-268b491c-e5b6-41b4-9974-3bf577ddc550\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-268b491c-e5b6-41b4-9974-3bf577ddc550\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: I0224 00:31:30.567931 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:30 crc kubenswrapper[5122]: E0224 00:31:30.568131 5122 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 24 00:31:30 crc kubenswrapper[5122]: E0224 00:31:30.568398 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls podName:cea3bfb1-bbc1-4d71-b7df-7b8070e46908 nodeName:}" failed. No retries permitted until 2026-02-24 00:31:31.568374949 +0000 UTC m=+1358.657829472 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "cea3bfb1-bbc1-4d71-b7df-7b8070e46908") : secret "default-prometheus-proxy-tls" not found Feb 24 00:31:31 crc kubenswrapper[5122]: I0224 00:31:31.582709 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:31 crc kubenswrapper[5122]: I0224 00:31:31.593026 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/cea3bfb1-bbc1-4d71-b7df-7b8070e46908-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"cea3bfb1-bbc1-4d71-b7df-7b8070e46908\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:31:31 crc kubenswrapper[5122]: I0224 00:31:31.723535 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 24 00:31:32 crc kubenswrapper[5122]: I0224 00:31:32.190731 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 24 00:31:32 crc kubenswrapper[5122]: W0224 00:31:32.194956 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcea3bfb1_bbc1_4d71_b7df_7b8070e46908.slice/crio-4092d10702fc8c8e3361d4f446fa19bfe532dc83b3f95dca8cda1fd977b13c71 WatchSource:0}: Error finding container 4092d10702fc8c8e3361d4f446fa19bfe532dc83b3f95dca8cda1fd977b13c71: Status 404 returned error can't find the container with id 4092d10702fc8c8e3361d4f446fa19bfe532dc83b3f95dca8cda1fd977b13c71 Feb 24 00:31:33 crc kubenswrapper[5122]: I0224 00:31:33.073258 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"cea3bfb1-bbc1-4d71-b7df-7b8070e46908","Type":"ContainerStarted","Data":"4092d10702fc8c8e3361d4f446fa19bfe532dc83b3f95dca8cda1fd977b13c71"} Feb 24 00:31:37 crc kubenswrapper[5122]: I0224 00:31:37.114699 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"cea3bfb1-bbc1-4d71-b7df-7b8070e46908","Type":"ContainerStarted","Data":"75d22fd6b2562650a08149578bcf53c024e5c0d611305ce1540cde413eebe745"} Feb 24 00:31:39 crc kubenswrapper[5122]: I0224 00:31:39.417512 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-fmgnk"] Feb 24 00:31:39 crc kubenswrapper[5122]: I0224 00:31:39.438652 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-fmgnk" Feb 24 00:31:39 crc kubenswrapper[5122]: I0224 00:31:39.479135 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-fmgnk"] Feb 24 00:31:39 crc kubenswrapper[5122]: I0224 00:31:39.600982 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6gf\" (UniqueName: \"kubernetes.io/projected/3a27adb0-983c-4b44-bdcb-ff3240f167aa-kube-api-access-mf6gf\") pod \"default-snmp-webhook-694dc457d5-fmgnk\" (UID: \"3a27adb0-983c-4b44-bdcb-ff3240f167aa\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-fmgnk" Feb 24 00:31:39 crc kubenswrapper[5122]: I0224 00:31:39.701937 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mf6gf\" (UniqueName: \"kubernetes.io/projected/3a27adb0-983c-4b44-bdcb-ff3240f167aa-kube-api-access-mf6gf\") pod \"default-snmp-webhook-694dc457d5-fmgnk\" (UID: \"3a27adb0-983c-4b44-bdcb-ff3240f167aa\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-fmgnk" Feb 24 00:31:39 crc kubenswrapper[5122]: I0224 00:31:39.723424 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf6gf\" (UniqueName: \"kubernetes.io/projected/3a27adb0-983c-4b44-bdcb-ff3240f167aa-kube-api-access-mf6gf\") pod \"default-snmp-webhook-694dc457d5-fmgnk\" (UID: \"3a27adb0-983c-4b44-bdcb-ff3240f167aa\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-fmgnk" Feb 24 00:31:39 crc kubenswrapper[5122]: I0224 00:31:39.768227 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-fmgnk" Feb 24 00:31:39 crc kubenswrapper[5122]: I0224 00:31:39.970856 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-fmgnk"] Feb 24 00:31:40 crc kubenswrapper[5122]: I0224 00:31:40.142523 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-fmgnk" event={"ID":"3a27adb0-983c-4b44-bdcb-ff3240f167aa","Type":"ContainerStarted","Data":"9807f87eb695c4965aed7ff87fa358f143c5538fe2e4aa742d5394a9a49aa218"} Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.386024 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.428668 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.431964 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.432280 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.432451 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.432697 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-rm4xh\"" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.432889 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Feb 24 00:31:43 crc kubenswrapper[5122]: 
I0224 00:31:43.433007 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.433099 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.559136 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0636d679-99a3-489d-af03-9ffc23b48526\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0636d679-99a3-489d-af03-9ffc23b48526\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.559593 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.559658 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-web-config\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.559705 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-tls-assets\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " 
pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.559728 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.559890 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-config-out\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.560000 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8brj\" (UniqueName: \"kubernetes.io/projected/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-kube-api-access-n8brj\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.560053 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-config-volume\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.560153 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-cluster-tls-config\") pod 
\"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.661054 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-config-volume\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.661383 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.661586 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-0636d679-99a3-489d-af03-9ffc23b48526\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0636d679-99a3-489d-af03-9ffc23b48526\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.661640 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.661862 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-web-config\") pod 
\"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: E0224 00:31:43.661989 5122 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 24 00:31:43 crc kubenswrapper[5122]: E0224 00:31:43.662112 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls podName:12d38c4a-59e9-4209-89e2-0f2c5d2730ce nodeName:}" failed. No retries permitted until 2026-02-24 00:31:44.162085483 +0000 UTC m=+1371.251539996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "12d38c4a-59e9-4209-89e2-0f2c5d2730ce") : secret "default-alertmanager-proxy-tls" not found Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.662177 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-tls-assets\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.662240 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.662452 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-config-out\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.662519 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n8brj\" (UniqueName: \"kubernetes.io/projected/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-kube-api-access-n8brj\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.667416 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-config-out\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.675214 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-web-config\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.675454 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-tls-assets\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.675638 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-config-volume\") pod \"alertmanager-default-0\" 
(UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.676019 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.686249 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.687292 5122 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.687341 5122 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-0636d679-99a3-489d-af03-9ffc23b48526\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0636d679-99a3-489d-af03-9ffc23b48526\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2583e14d489e509288d91634dbcf4dbf36608965877125e762ce33e2424acf20/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.690129 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8brj\" (UniqueName: \"kubernetes.io/projected/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-kube-api-access-n8brj\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:43 crc kubenswrapper[5122]: I0224 00:31:43.725482 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-0636d679-99a3-489d-af03-9ffc23b48526\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0636d679-99a3-489d-af03-9ffc23b48526\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:44 crc kubenswrapper[5122]: I0224 00:31:44.169201 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:44 crc kubenswrapper[5122]: E0224 00:31:44.169342 5122 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret 
"default-alertmanager-proxy-tls" not found Feb 24 00:31:44 crc kubenswrapper[5122]: E0224 00:31:44.169648 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls podName:12d38c4a-59e9-4209-89e2-0f2c5d2730ce nodeName:}" failed. No retries permitted until 2026-02-24 00:31:45.169628684 +0000 UTC m=+1372.259083197 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "12d38c4a-59e9-4209-89e2-0f2c5d2730ce") : secret "default-alertmanager-proxy-tls" not found Feb 24 00:31:45 crc kubenswrapper[5122]: I0224 00:31:45.176565 5122 generic.go:358] "Generic (PLEG): container finished" podID="cea3bfb1-bbc1-4d71-b7df-7b8070e46908" containerID="75d22fd6b2562650a08149578bcf53c024e5c0d611305ce1540cde413eebe745" exitCode=0 Feb 24 00:31:45 crc kubenswrapper[5122]: I0224 00:31:45.176656 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"cea3bfb1-bbc1-4d71-b7df-7b8070e46908","Type":"ContainerDied","Data":"75d22fd6b2562650a08149578bcf53c024e5c0d611305ce1540cde413eebe745"} Feb 24 00:31:45 crc kubenswrapper[5122]: I0224 00:31:45.184224 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:45 crc kubenswrapper[5122]: E0224 00:31:45.184435 5122 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 24 00:31:45 crc 
kubenswrapper[5122]: E0224 00:31:45.184492 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls podName:12d38c4a-59e9-4209-89e2-0f2c5d2730ce nodeName:}" failed. No retries permitted until 2026-02-24 00:31:47.184473578 +0000 UTC m=+1374.273928091 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "12d38c4a-59e9-4209-89e2-0f2c5d2730ce") : secret "default-alertmanager-proxy-tls" not found Feb 24 00:31:47 crc kubenswrapper[5122]: I0224 00:31:47.214034 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:47 crc kubenswrapper[5122]: I0224 00:31:47.222252 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/12d38c4a-59e9-4209-89e2-0f2c5d2730ce-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"12d38c4a-59e9-4209-89e2-0f2c5d2730ce\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:47 crc kubenswrapper[5122]: I0224 00:31:47.358365 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 24 00:31:47 crc kubenswrapper[5122]: I0224 00:31:47.845535 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 24 00:31:47 crc kubenswrapper[5122]: W0224 00:31:47.853344 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12d38c4a_59e9_4209_89e2_0f2c5d2730ce.slice/crio-caa301163ab7c673a0a0a61b5f3a1fddb9bf71cf7a604ed3dab0b62faaf99133 WatchSource:0}: Error finding container caa301163ab7c673a0a0a61b5f3a1fddb9bf71cf7a604ed3dab0b62faaf99133: Status 404 returned error can't find the container with id caa301163ab7c673a0a0a61b5f3a1fddb9bf71cf7a604ed3dab0b62faaf99133 Feb 24 00:31:48 crc kubenswrapper[5122]: I0224 00:31:48.203940 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"12d38c4a-59e9-4209-89e2-0f2c5d2730ce","Type":"ContainerStarted","Data":"caa301163ab7c673a0a0a61b5f3a1fddb9bf71cf7a604ed3dab0b62faaf99133"} Feb 24 00:31:48 crc kubenswrapper[5122]: I0224 00:31:48.205670 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-fmgnk" event={"ID":"3a27adb0-983c-4b44-bdcb-ff3240f167aa","Type":"ContainerStarted","Data":"7008e326a10ff1ec695a94a7250096d894cb81d9d6b9f3bb1c3f9839ef7a7f1b"} Feb 24 00:31:48 crc kubenswrapper[5122]: I0224 00:31:48.232656 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-fmgnk" podStartSLOduration=2.098355137 podStartE2EDuration="9.232626179s" podCreationTimestamp="2026-02-24 00:31:39 +0000 UTC" firstStartedPulling="2026-02-24 00:31:39.979959006 +0000 UTC m=+1367.069413519" lastFinishedPulling="2026-02-24 00:31:47.114230048 +0000 UTC m=+1374.203684561" observedRunningTime="2026-02-24 00:31:48.219806203 +0000 UTC m=+1375.309260716" 
watchObservedRunningTime="2026-02-24 00:31:48.232626179 +0000 UTC m=+1375.322080712" Feb 24 00:31:50 crc kubenswrapper[5122]: I0224 00:31:50.219973 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"12d38c4a-59e9-4209-89e2-0f2c5d2730ce","Type":"ContainerStarted","Data":"42ef0c51ca91e2b01a0edd664e63c078f4054fe2898ae36d8fe1745e4484b532"} Feb 24 00:31:51 crc kubenswrapper[5122]: I0224 00:31:51.228334 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"cea3bfb1-bbc1-4d71-b7df-7b8070e46908","Type":"ContainerStarted","Data":"1af37b3d57f41e9f8caab2280159be7740e71ee02f632eb47b9b639c76b7c618"} Feb 24 00:31:54 crc kubenswrapper[5122]: I0224 00:31:54.255849 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"cea3bfb1-bbc1-4d71-b7df-7b8070e46908","Type":"ContainerStarted","Data":"f25ad61b28cbf29cedb56fccd042022e7bd20cff2e3a713c9d64beac2ff31ffe"} Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.270823 5122 generic.go:358] "Generic (PLEG): container finished" podID="12d38c4a-59e9-4209-89e2-0f2c5d2730ce" containerID="42ef0c51ca91e2b01a0edd664e63c078f4054fe2898ae36d8fe1745e4484b532" exitCode=0 Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.270923 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"12d38c4a-59e9-4209-89e2-0f2c5d2730ce","Type":"ContainerDied","Data":"42ef0c51ca91e2b01a0edd664e63c078f4054fe2898ae36d8fe1745e4484b532"} Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.592421 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9"] Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.899673 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9"] 
Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.900100 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.903044 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.903568 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-wlr7m\"" Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.903621 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.903815 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.969053 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdhmj\" (UniqueName: \"kubernetes.io/projected/3029a88d-f6e5-4969-937b-a2b09c89d9ba-kube-api-access-vdhmj\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.969153 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3029a88d-f6e5-4969-937b-a2b09c89d9ba-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:56 crc 
kubenswrapper[5122]: I0224 00:31:56.969273 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3029a88d-f6e5-4969-937b-a2b09c89d9ba-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.969319 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:56 crc kubenswrapper[5122]: I0224 00:31:56.969355 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.070840 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdhmj\" (UniqueName: \"kubernetes.io/projected/3029a88d-f6e5-4969-937b-a2b09c89d9ba-kube-api-access-vdhmj\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.070888 5122 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3029a88d-f6e5-4969-937b-a2b09c89d9ba-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.070929 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3029a88d-f6e5-4969-937b-a2b09c89d9ba-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.070960 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.070987 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: E0224 00:31:57.071147 5122 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 24 00:31:57 crc kubenswrapper[5122]: E0224 
00:31:57.071229 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls podName:3029a88d-f6e5-4969-937b-a2b09c89d9ba nodeName:}" failed. No retries permitted until 2026-02-24 00:31:57.571208318 +0000 UTC m=+1384.660662831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" (UID: "3029a88d-f6e5-4969-937b-a2b09c89d9ba") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.071405 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3029a88d-f6e5-4969-937b-a2b09c89d9ba-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.072010 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3029a88d-f6e5-4969-937b-a2b09c89d9ba-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.077932 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.088932 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdhmj\" (UniqueName: \"kubernetes.io/projected/3029a88d-f6e5-4969-937b-a2b09c89d9ba-kube-api-access-vdhmj\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: I0224 00:31:57.577700 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:57 crc kubenswrapper[5122]: E0224 00:31:57.577884 5122 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 24 00:31:57 crc kubenswrapper[5122]: E0224 00:31:57.578202 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls podName:3029a88d-f6e5-4969-937b-a2b09c89d9ba nodeName:}" failed. No retries permitted until 2026-02-24 00:31:58.578176364 +0000 UTC m=+1385.667630877 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" (UID: "3029a88d-f6e5-4969-937b-a2b09c89d9ba") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 24 00:31:58 crc kubenswrapper[5122]: I0224 00:31:58.596361 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:58 crc kubenswrapper[5122]: I0224 00:31:58.600819 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3029a88d-f6e5-4969-937b-a2b09c89d9ba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9\" (UID: \"3029a88d-f6e5-4969-937b-a2b09c89d9ba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:58 crc kubenswrapper[5122]: I0224 00:31:58.716101 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.358384 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8"] Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.368091 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8"] Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.368217 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.370038 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.370908 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.405247 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e826ca33-c11c-4f9c-b71f-592d039c2ab1-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.405288 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: 
\"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.405324 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e826ca33-c11c-4f9c-b71f-592d039c2ab1-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.405397 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.405415 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtlmj\" (UniqueName: \"kubernetes.io/projected/e826ca33-c11c-4f9c-b71f-592d039c2ab1-kube-api-access-dtlmj\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.507171 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.507240 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e826ca33-c11c-4f9c-b71f-592d039c2ab1-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: E0224 00:31:59.507369 5122 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 24 00:31:59 crc kubenswrapper[5122]: E0224 00:31:59.507457 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls podName:e826ca33-c11c-4f9c-b71f-592d039c2ab1 nodeName:}" failed. No retries permitted until 2026-02-24 00:32:00.00743876 +0000 UTC m=+1387.096893273 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" (UID: "e826ca33-c11c-4f9c-b71f-592d039c2ab1") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.508376 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e826ca33-c11c-4f9c-b71f-592d039c2ab1-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.508556 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.508591 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dtlmj\" (UniqueName: \"kubernetes.io/projected/e826ca33-c11c-4f9c-b71f-592d039c2ab1-kube-api-access-dtlmj\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.512423 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e826ca33-c11c-4f9c-b71f-592d039c2ab1-socket-dir\") pod 
\"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.512849 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e826ca33-c11c-4f9c-b71f-592d039c2ab1-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.516349 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:31:59 crc kubenswrapper[5122]: I0224 00:31:59.525873 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtlmj\" (UniqueName: \"kubernetes.io/projected/e826ca33-c11c-4f9c-b71f-592d039c2ab1-kube-api-access-dtlmj\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.020372 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:32:00 crc kubenswrapper[5122]: E0224 00:32:00.020544 5122 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 24 00:32:00 crc kubenswrapper[5122]: E0224 00:32:00.020647 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls podName:e826ca33-c11c-4f9c-b71f-592d039c2ab1 nodeName:}" failed. No retries permitted until 2026-02-24 00:32:01.020610619 +0000 UTC m=+1388.110065132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" (UID: "e826ca33-c11c-4f9c-b71f-592d039c2ab1") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.124689 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531552-hghbn"] Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.153018 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531552-hghbn"] Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.153217 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531552-hghbn" Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.155843 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.155909 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.157019 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.222415 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq95z\" (UniqueName: \"kubernetes.io/projected/fea4ee8e-ad0d-42a7-81a6-9471ee82df19-kube-api-access-pq95z\") pod \"auto-csr-approver-29531552-hghbn\" (UID: \"fea4ee8e-ad0d-42a7-81a6-9471ee82df19\") " pod="openshift-infra/auto-csr-approver-29531552-hghbn" Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.325835 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pq95z\" (UniqueName: \"kubernetes.io/projected/fea4ee8e-ad0d-42a7-81a6-9471ee82df19-kube-api-access-pq95z\") pod \"auto-csr-approver-29531552-hghbn\" (UID: \"fea4ee8e-ad0d-42a7-81a6-9471ee82df19\") " pod="openshift-infra/auto-csr-approver-29531552-hghbn" Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.350230 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq95z\" (UniqueName: \"kubernetes.io/projected/fea4ee8e-ad0d-42a7-81a6-9471ee82df19-kube-api-access-pq95z\") pod \"auto-csr-approver-29531552-hghbn\" (UID: \"fea4ee8e-ad0d-42a7-81a6-9471ee82df19\") " pod="openshift-infra/auto-csr-approver-29531552-hghbn" Feb 24 00:32:00 crc kubenswrapper[5122]: I0224 00:32:00.474019 5122 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531552-hghbn" Feb 24 00:32:01 crc kubenswrapper[5122]: I0224 00:32:01.039313 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:32:01 crc kubenswrapper[5122]: I0224 00:32:01.044747 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e826ca33-c11c-4f9c-b71f-592d039c2ab1-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8\" (UID: \"e826ca33-c11c-4f9c-b71f-592d039c2ab1\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:32:01 crc kubenswrapper[5122]: I0224 00:32:01.194386 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" Feb 24 00:32:01 crc kubenswrapper[5122]: I0224 00:32:01.431859 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531552-hghbn"] Feb 24 00:32:01 crc kubenswrapper[5122]: W0224 00:32:01.445120 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfea4ee8e_ad0d_42a7_81a6_9471ee82df19.slice/crio-9cdb260184cf2cebadbf8ba9ad66b37f2217348702a828ee7ab56660565d0fe2 WatchSource:0}: Error finding container 9cdb260184cf2cebadbf8ba9ad66b37f2217348702a828ee7ab56660565d0fe2: Status 404 returned error can't find the container with id 9cdb260184cf2cebadbf8ba9ad66b37f2217348702a828ee7ab56660565d0fe2 Feb 24 00:32:01 crc kubenswrapper[5122]: I0224 00:32:01.574749 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9"] Feb 24 00:32:01 crc kubenswrapper[5122]: I0224 00:32:01.670495 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8"] Feb 24 00:32:01 crc kubenswrapper[5122]: W0224 00:32:01.781601 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode826ca33_c11c_4f9c_b71f_592d039c2ab1.slice/crio-e8351060f3532f6182ada470a725e4fef04a098d2d9f5651fb1426a7321d0baa WatchSource:0}: Error finding container e8351060f3532f6182ada470a725e4fef04a098d2d9f5651fb1426a7321d0baa: Status 404 returned error can't find the container with id e8351060f3532f6182ada470a725e4fef04a098d2d9f5651fb1426a7321d0baa Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.314733 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" 
event={"ID":"e826ca33-c11c-4f9c-b71f-592d039c2ab1","Type":"ContainerStarted","Data":"e8351060f3532f6182ada470a725e4fef04a098d2d9f5651fb1426a7321d0baa"} Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.315994 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531552-hghbn" event={"ID":"fea4ee8e-ad0d-42a7-81a6-9471ee82df19","Type":"ContainerStarted","Data":"9cdb260184cf2cebadbf8ba9ad66b37f2217348702a828ee7ab56660565d0fe2"} Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.318709 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"cea3bfb1-bbc1-4d71-b7df-7b8070e46908","Type":"ContainerStarted","Data":"c75513fcf02f94941d6f471cc6073ce2331ee63f30be44957dfa010834be3c49"} Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.320866 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" event={"ID":"3029a88d-f6e5-4969-937b-a2b09c89d9ba","Type":"ContainerStarted","Data":"b141111ee08726ff1d8e3dec3208c6003b6b852bbcfdef449e157d09bbd7b39a"} Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.343944 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.769562676 podStartE2EDuration="34.34392864s" podCreationTimestamp="2026-02-24 00:31:28 +0000 UTC" firstStartedPulling="2026-02-24 00:31:32.198953382 +0000 UTC m=+1359.288407905" lastFinishedPulling="2026-02-24 00:32:01.773319356 +0000 UTC m=+1388.862773869" observedRunningTime="2026-02-24 00:32:02.343124539 +0000 UTC m=+1389.432579072" watchObservedRunningTime="2026-02-24 00:32:02.34392864 +0000 UTC m=+1389.433383153" Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.899880 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5"] Feb 24 00:32:02 crc 
kubenswrapper[5122]: I0224 00:32:02.964064 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5"] Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.964246 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.965696 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 24 00:32:02 crc kubenswrapper[5122]: I0224 00:32:02.966493 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.069300 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.069344 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5699997a-df40-4c34-9e05-71f859b5e5a7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.069383 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svsp5\" (UniqueName: 
\"kubernetes.io/projected/5699997a-df40-4c34-9e05-71f859b5e5a7-kube-api-access-svsp5\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.069413 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.069442 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5699997a-df40-4c34-9e05-71f859b5e5a7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.170453 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5699997a-df40-4c34-9e05-71f859b5e5a7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.170548 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-session-secret\") pod 
\"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.170582 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5699997a-df40-4c34-9e05-71f859b5e5a7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.170618 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svsp5\" (UniqueName: \"kubernetes.io/projected/5699997a-df40-4c34-9e05-71f859b5e5a7-kube-api-access-svsp5\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.170649 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: E0224 00:32:03.170754 5122 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 24 00:32:03 crc kubenswrapper[5122]: E0224 00:32:03.170808 5122 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls podName:5699997a-df40-4c34-9e05-71f859b5e5a7 nodeName:}" failed. No retries permitted until 2026-02-24 00:32:03.670789905 +0000 UTC m=+1390.760244418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" (UID: "5699997a-df40-4c34-9e05-71f859b5e5a7") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.170867 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5699997a-df40-4c34-9e05-71f859b5e5a7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.171636 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5699997a-df40-4c34-9e05-71f859b5e5a7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.178193 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" Feb 24 00:32:03 crc 
kubenswrapper[5122]: I0224 00:32:03.194118 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svsp5\" (UniqueName: \"kubernetes.io/projected/5699997a-df40-4c34-9e05-71f859b5e5a7-kube-api-access-svsp5\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5"
Feb 24 00:32:03 crc kubenswrapper[5122]: I0224 00:32:03.677883 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5"
Feb 24 00:32:03 crc kubenswrapper[5122]: E0224 00:32:03.678130 5122 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 24 00:32:03 crc kubenswrapper[5122]: E0224 00:32:03.678234 5122 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls podName:5699997a-df40-4c34-9e05-71f859b5e5a7 nodeName:}" failed. No retries permitted until 2026-02-24 00:32:04.678210703 +0000 UTC m=+1391.767665296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" (UID: "5699997a-df40-4c34-9e05-71f859b5e5a7") : secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 24 00:32:04 crc kubenswrapper[5122]: I0224 00:32:04.338412 5122 generic.go:358] "Generic (PLEG): container finished" podID="fea4ee8e-ad0d-42a7-81a6-9471ee82df19" containerID="1166bd241f43f97d7f8df7f15d55716f18b2a41ad62bdbe2a1976f65b96ef260" exitCode=0
Feb 24 00:32:04 crc kubenswrapper[5122]: I0224 00:32:04.338530 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531552-hghbn" event={"ID":"fea4ee8e-ad0d-42a7-81a6-9471ee82df19","Type":"ContainerDied","Data":"1166bd241f43f97d7f8df7f15d55716f18b2a41ad62bdbe2a1976f65b96ef260"}
Feb 24 00:32:04 crc kubenswrapper[5122]: I0224 00:32:04.696823 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5"
Feb 24 00:32:04 crc kubenswrapper[5122]: I0224 00:32:04.702573 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5699997a-df40-4c34-9e05-71f859b5e5a7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5\" (UID: \"5699997a-df40-4c34-9e05-71f859b5e5a7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5"
Feb 24 00:32:04 crc kubenswrapper[5122]: I0224 00:32:04.778972 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5"
Feb 24 00:32:05 crc kubenswrapper[5122]: I0224 00:32:05.614105 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531552-hghbn"
Feb 24 00:32:05 crc kubenswrapper[5122]: I0224 00:32:05.717642 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq95z\" (UniqueName: \"kubernetes.io/projected/fea4ee8e-ad0d-42a7-81a6-9471ee82df19-kube-api-access-pq95z\") pod \"fea4ee8e-ad0d-42a7-81a6-9471ee82df19\" (UID: \"fea4ee8e-ad0d-42a7-81a6-9471ee82df19\") "
Feb 24 00:32:05 crc kubenswrapper[5122]: I0224 00:32:05.726065 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea4ee8e-ad0d-42a7-81a6-9471ee82df19-kube-api-access-pq95z" (OuterVolumeSpecName: "kube-api-access-pq95z") pod "fea4ee8e-ad0d-42a7-81a6-9471ee82df19" (UID: "fea4ee8e-ad0d-42a7-81a6-9471ee82df19"). InnerVolumeSpecName "kube-api-access-pq95z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:32:05 crc kubenswrapper[5122]: I0224 00:32:05.797237 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5"]
Feb 24 00:32:05 crc kubenswrapper[5122]: I0224 00:32:05.822402 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pq95z\" (UniqueName: \"kubernetes.io/projected/fea4ee8e-ad0d-42a7-81a6-9471ee82df19-kube-api-access-pq95z\") on node \"crc\" DevicePath \"\""
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.364117 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" event={"ID":"e826ca33-c11c-4f9c-b71f-592d039c2ab1","Type":"ContainerStarted","Data":"b4ba51ce39611a49b72e50abc337c12c7e227db97aecc9e5d256cd70fc4a7066"}
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.369594 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531552-hghbn" event={"ID":"fea4ee8e-ad0d-42a7-81a6-9471ee82df19","Type":"ContainerDied","Data":"9cdb260184cf2cebadbf8ba9ad66b37f2217348702a828ee7ab56660565d0fe2"}
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.369653 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cdb260184cf2cebadbf8ba9ad66b37f2217348702a828ee7ab56660565d0fe2"
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.369651 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531552-hghbn"
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.372460 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" event={"ID":"5699997a-df40-4c34-9e05-71f859b5e5a7","Type":"ContainerStarted","Data":"4f5dcabaa04557534180a1649f505121a217ec169d0b79f005516aee9fc2e9e4"}
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.379303 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"12d38c4a-59e9-4209-89e2-0f2c5d2730ce","Type":"ContainerStarted","Data":"2fb242dc2ce53da40518c63b6b9e7a0654b4d7b7f8ec67d2ecf0151a5c69d6a5"}
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.380912 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" event={"ID":"3029a88d-f6e5-4969-937b-a2b09c89d9ba","Type":"ContainerStarted","Data":"489c1ecfdd227bf01c9289ff51599e225b0f271de45f5727bd1ab15c658c7974"}
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.674510 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531546-8lk88"]
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.679303 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531546-8lk88"]
Feb 24 00:32:06 crc kubenswrapper[5122]: I0224 00:32:06.724274 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0"
Feb 24 00:32:07 crc kubenswrapper[5122]: I0224 00:32:07.396124 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" event={"ID":"e826ca33-c11c-4f9c-b71f-592d039c2ab1","Type":"ContainerStarted","Data":"33ebf819378d14fd76fe0e2a99d9acbef9c2ae41ec7ddb093076afa349fd242b"}
Feb 24 00:32:07 crc kubenswrapper[5122]: I0224 00:32:07.398273 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" event={"ID":"5699997a-df40-4c34-9e05-71f859b5e5a7","Type":"ContainerStarted","Data":"a14a978e05e1821848d00e4d0cc123f46b703b947ac4c935fd3845c64303cd69"}
Feb 24 00:32:07 crc kubenswrapper[5122]: I0224 00:32:07.398317 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" event={"ID":"5699997a-df40-4c34-9e05-71f859b5e5a7","Type":"ContainerStarted","Data":"2935a8ab843bde58694c8ff4f71b2be6b631e6629b1c77e9f9f218fd34d26ea0"}
Feb 24 00:32:07 crc kubenswrapper[5122]: I0224 00:32:07.401436 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" event={"ID":"3029a88d-f6e5-4969-937b-a2b09c89d9ba","Type":"ContainerStarted","Data":"6abc7b145cd25a75f4193fe00d1a8d3644f83756c18072284200f48a4807ecac"}
Feb 24 00:32:07 crc kubenswrapper[5122]: I0224 00:32:07.784112 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90820f7b-8662-452d-9559-6505af1ff0f3" path="/var/lib/kubelet/pods/90820f7b-8662-452d-9559-6505af1ff0f3/volumes"
Feb 24 00:32:08 crc kubenswrapper[5122]: I0224 00:32:08.416720 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"12d38c4a-59e9-4209-89e2-0f2c5d2730ce","Type":"ContainerStarted","Data":"d05fe62a2ba0954f43cf0a7273e4c91c3da678bf38cccab5caf782d009a0626c"}
Feb 24 00:32:08 crc kubenswrapper[5122]: I0224 00:32:08.416775 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"12d38c4a-59e9-4209-89e2-0f2c5d2730ce","Type":"ContainerStarted","Data":"d320eb8f0495bec9278441bbe8e4097b2a79a930153346eebffc1cb9dcba70ed"}
Feb 24 00:32:08 crc kubenswrapper[5122]: I0224 00:32:08.443762 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=14.519510919 podStartE2EDuration="26.443740123s" podCreationTimestamp="2026-02-24 00:31:42 +0000 UTC" firstStartedPulling="2026-02-24 00:31:56.27321207 +0000 UTC m=+1383.362666583" lastFinishedPulling="2026-02-24 00:32:08.197441274 +0000 UTC m=+1395.286895787" observedRunningTime="2026-02-24 00:32:08.439337598 +0000 UTC m=+1395.528792121" watchObservedRunningTime="2026-02-24 00:32:08.443740123 +0000 UTC m=+1395.533194636"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.560028 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"]
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.561472 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fea4ee8e-ad0d-42a7-81a6-9471ee82df19" containerName="oc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.561504 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea4ee8e-ad0d-42a7-81a6-9471ee82df19" containerName="oc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.561734 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="fea4ee8e-ad0d-42a7-81a6-9471ee82df19" containerName="oc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.722780 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"]
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.722921 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.725150 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\""
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.727116 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\""
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.800124 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/96dd63c5-de4b-4410-b007-da974ebb4e0e-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.800189 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/96dd63c5-de4b-4410-b007-da974ebb4e0e-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.800266 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7g2s\" (UniqueName: \"kubernetes.io/projected/96dd63c5-de4b-4410-b007-da974ebb4e0e-kube-api-access-p7g2s\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.800296 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/96dd63c5-de4b-4410-b007-da974ebb4e0e-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.902410 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/96dd63c5-de4b-4410-b007-da974ebb4e0e-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.902512 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/96dd63c5-de4b-4410-b007-da974ebb4e0e-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.902648 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p7g2s\" (UniqueName: \"kubernetes.io/projected/96dd63c5-de4b-4410-b007-da974ebb4e0e-kube-api-access-p7g2s\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.902711 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/96dd63c5-de4b-4410-b007-da974ebb4e0e-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.903473 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/96dd63c5-de4b-4410-b007-da974ebb4e0e-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.903528 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/96dd63c5-de4b-4410-b007-da974ebb4e0e-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.913247 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/96dd63c5-de4b-4410-b007-da974ebb4e0e-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:10 crc kubenswrapper[5122]: I0224 00:32:10.925151 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7g2s\" (UniqueName: \"kubernetes.io/projected/96dd63c5-de4b-4410-b007-da974ebb4e0e-kube-api-access-p7g2s\") pod \"default-cloud1-coll-event-smartgateway-89d47b45c-k5khc\" (UID: \"96dd63c5-de4b-4410-b007-da974ebb4e0e\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.050908 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.407348 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"]
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.416351 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.418708 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\""
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.418770 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"]
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.511575 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.511617 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97vjv\" (UniqueName: \"kubernetes.io/projected/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-kube-api-access-97vjv\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.511660 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.511690 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.612609 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.612663 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-97vjv\" (UniqueName: \"kubernetes.io/projected/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-kube-api-access-97vjv\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.612722 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.612769 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.613457 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.613623 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.626223 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.629516 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-97vjv\" (UniqueName: \"kubernetes.io/projected/33ccb427-9fc9-4980-bebf-a48b7cdad5ba-kube-api-access-97vjv\") pod \"default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf\" (UID: \"33ccb427-9fc9-4980-bebf-a48b7cdad5ba\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:11 crc kubenswrapper[5122]: I0224 00:32:11.734538 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"
Feb 24 00:32:16 crc kubenswrapper[5122]: I0224 00:32:16.724460 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Feb 24 00:32:16 crc kubenswrapper[5122]: I0224 00:32:16.760155 5122 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Feb 24 00:32:17 crc kubenswrapper[5122]: I0224 00:32:17.533660 5122 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Feb 24 00:32:19 crc kubenswrapper[5122]: I0224 00:32:19.806593 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf"]
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.081590 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc"]
Feb 24 00:32:20 crc kubenswrapper[5122]: W0224 00:32:20.112168 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96dd63c5_de4b_4410_b007_da974ebb4e0e.slice/crio-7179c89085aabdf8acc0001d2e67b79fdd538474bfb053b3da26b118e0a3e0d2 WatchSource:0}: Error finding container 7179c89085aabdf8acc0001d2e67b79fdd538474bfb053b3da26b118e0a3e0d2: Status 404 returned error can't find the container with id 7179c89085aabdf8acc0001d2e67b79fdd538474bfb053b3da26b118e0a3e0d2
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.505442 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" event={"ID":"e826ca33-c11c-4f9c-b71f-592d039c2ab1","Type":"ContainerStarted","Data":"3830d6f0f83b7f98912459e27917e03dfad3837c9713b4956ba4221a0fc93c50"}
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.508726 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" event={"ID":"5699997a-df40-4c34-9e05-71f859b5e5a7","Type":"ContainerStarted","Data":"8a6aa112604887cd616a87debb0cb3048f6e52dc3b37cbf3949335930d13b827"}
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.511057 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" event={"ID":"3029a88d-f6e5-4969-937b-a2b09c89d9ba","Type":"ContainerStarted","Data":"6d2ca4a215168e7ca2a1050ce7df11d3ab9c07c67895da21aeb8ead8367e41ec"}
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.512150 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" event={"ID":"33ccb427-9fc9-4980-bebf-a48b7cdad5ba","Type":"ContainerStarted","Data":"80adffa1eac48ac7080622cb5ff405fb7d437e9caf566e378720a8fd4c50e8af"}
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.512179 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" event={"ID":"33ccb427-9fc9-4980-bebf-a48b7cdad5ba","Type":"ContainerStarted","Data":"0c4591c8bf43478442eb4829173c9cd4f324b688978101d58e294bbe7099c294"}
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.513456 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" event={"ID":"96dd63c5-de4b-4410-b007-da974ebb4e0e","Type":"ContainerStarted","Data":"d3ceb429e92c0554118dc73489ae7c31aec310afb90a3ea32359c04c191c4c9f"}
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.513480 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" event={"ID":"96dd63c5-de4b-4410-b007-da974ebb4e0e","Type":"ContainerStarted","Data":"7179c89085aabdf8acc0001d2e67b79fdd538474bfb053b3da26b118e0a3e0d2"}
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.522633 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" podStartSLOduration=3.120040489 podStartE2EDuration="21.522616312s" podCreationTimestamp="2026-02-24 00:31:59 +0000 UTC" firstStartedPulling="2026-02-24 00:32:01.785098145 +0000 UTC m=+1388.874552658" lastFinishedPulling="2026-02-24 00:32:20.187673958 +0000 UTC m=+1407.277128481" observedRunningTime="2026-02-24 00:32:20.521736859 +0000 UTC m=+1407.611191402" watchObservedRunningTime="2026-02-24 00:32:20.522616312 +0000 UTC m=+1407.612070835"
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.600780 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" podStartSLOduration=4.065428661 podStartE2EDuration="18.600761622s" podCreationTimestamp="2026-02-24 00:32:02 +0000 UTC" firstStartedPulling="2026-02-24 00:32:05.790148901 +0000 UTC m=+1392.879603414" lastFinishedPulling="2026-02-24 00:32:20.325481862 +0000 UTC m=+1407.414936375" observedRunningTime="2026-02-24 00:32:20.594920859 +0000 UTC m=+1407.684375392" watchObservedRunningTime="2026-02-24 00:32:20.600761622 +0000 UTC m=+1407.690216135"
Feb 24 00:32:20 crc kubenswrapper[5122]: I0224 00:32:20.630598 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" podStartSLOduration=6.246792324 podStartE2EDuration="24.630579584s" podCreationTimestamp="2026-02-24 00:31:56 +0000 UTC" firstStartedPulling="2026-02-24 00:32:01.783348569 +0000 UTC m=+1388.872803082" lastFinishedPulling="2026-02-24 00:32:20.167135789 +0000 UTC m=+1407.256590342" observedRunningTime="2026-02-24 00:32:20.62165978 +0000 UTC m=+1407.711114313" watchObservedRunningTime="2026-02-24 00:32:20.630579584 +0000 UTC m=+1407.720034097"
Feb 24 00:32:21 crc kubenswrapper[5122]: I0224 00:32:21.522536 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" event={"ID":"33ccb427-9fc9-4980-bebf-a48b7cdad5ba","Type":"ContainerStarted","Data":"5527209c092720c9aae3b34db5091465a95bf943e3877f1f357c65dc331e373c"}
Feb 24 00:32:21 crc kubenswrapper[5122]: I0224 00:32:21.524765 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" event={"ID":"96dd63c5-de4b-4410-b007-da974ebb4e0e","Type":"ContainerStarted","Data":"276a8c31aa85ee1fc1de9c41db7f3274565d40358f2a4c81699c2ed5ffdf38cd"}
Feb 24 00:32:21 crc kubenswrapper[5122]: I0224 00:32:21.544195 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" podStartSLOduration=9.852709649 podStartE2EDuration="10.544173593s" podCreationTimestamp="2026-02-24 00:32:11 +0000 UTC" firstStartedPulling="2026-02-24 00:32:19.813032943 +0000 UTC m=+1406.902487456" lastFinishedPulling="2026-02-24 00:32:20.504496897 +0000 UTC m=+1407.593951400" observedRunningTime="2026-02-24 00:32:21.542486759 +0000 UTC m=+1408.631941282" watchObservedRunningTime="2026-02-24 00:32:21.544173593 +0000 UTC m=+1408.633628146"
Feb 24 00:32:21 crc kubenswrapper[5122]: I0224 00:32:21.578769 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" podStartSLOduration=11.188585287 podStartE2EDuration="11.5787483s" podCreationTimestamp="2026-02-24 00:32:10 +0000 UTC" firstStartedPulling="2026-02-24 00:32:20.115534176 +0000 UTC m=+1407.204988689" lastFinishedPulling="2026-02-24 00:32:20.505697189 +0000 UTC m=+1407.595151702" observedRunningTime="2026-02-24 00:32:21.572232939 +0000 UTC m=+1408.661687462" watchObservedRunningTime="2026-02-24 00:32:21.5787483 +0000 UTC m=+1408.668202803"
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.481386 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-6pksx"]
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.481925 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" podUID="d6050c59-cb83-4be0-9710-f1739d8f457f" containerName="default-interconnect" containerID="cri-o://b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3" gracePeriod=30
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.864943 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx"
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.908421 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-dn7wc"]
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.909444 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d6050c59-cb83-4be0-9710-f1739d8f457f" containerName="default-interconnect"
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.909716 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6050c59-cb83-4be0-9710-f1739d8f457f" containerName="default-interconnect"
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.910046 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="d6050c59-cb83-4be0-9710-f1739d8f457f" containerName="default-interconnect"
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.915845 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-dn7wc"]
Feb 24 00:32:23 crc kubenswrapper[5122]: I0224 00:32:23.915963 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc"
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.003124 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-ca\") pod \"d6050c59-cb83-4be0-9710-f1739d8f457f\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") "
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.003294 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-ca\") pod \"d6050c59-cb83-4be0-9710-f1739d8f457f\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") "
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.003380 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-config\") pod \"d6050c59-cb83-4be0-9710-f1739d8f457f\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") "
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.003400 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-credentials\") pod \"d6050c59-cb83-4be0-9710-f1739d8f457f\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") "
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.003571 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-users\") pod \"d6050c59-cb83-4be0-9710-f1739d8f457f\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") "
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.003642 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-credentials\") pod \"d6050c59-cb83-4be0-9710-f1739d8f457f\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") "
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.003687 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6ftj\" (UniqueName: \"kubernetes.io/projected/d6050c59-cb83-4be0-9710-f1739d8f457f-kube-api-access-w6ftj\") pod \"d6050c59-cb83-4be0-9710-f1739d8f457f\" (UID: \"d6050c59-cb83-4be0-9710-f1739d8f457f\") "
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.003887 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc"
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.004142 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "d6050c59-cb83-4be0-9710-f1739d8f457f" (UID: "d6050c59-cb83-4be0-9710-f1739d8f457f"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.004386 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc"
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.004490 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc"
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.004579 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4d5r\" (UniqueName: \"kubernetes.io/projected/e338f3b5-0567-4c6e-962f-46f0e80dc52a-kube-api-access-m4d5r\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc"
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.004632 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-sasl-users\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc"
Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.004657 5122 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e338f3b5-0567-4c6e-962f-46f0e80dc52a-sasl-config\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.004741 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.005128 5122 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.008440 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "d6050c59-cb83-4be0-9710-f1739d8f457f" (UID: "d6050c59-cb83-4be0-9710-f1739d8f457f"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.008797 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "d6050c59-cb83-4be0-9710-f1739d8f457f" (UID: "d6050c59-cb83-4be0-9710-f1739d8f457f"). 
InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.010780 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "d6050c59-cb83-4be0-9710-f1739d8f457f" (UID: "d6050c59-cb83-4be0-9710-f1739d8f457f"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.011357 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "d6050c59-cb83-4be0-9710-f1739d8f457f" (UID: "d6050c59-cb83-4be0-9710-f1739d8f457f"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.012177 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6050c59-cb83-4be0-9710-f1739d8f457f-kube-api-access-w6ftj" (OuterVolumeSpecName: "kube-api-access-w6ftj") pod "d6050c59-cb83-4be0-9710-f1739d8f457f" (UID: "d6050c59-cb83-4be0-9710-f1739d8f457f"). InnerVolumeSpecName "kube-api-access-w6ftj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.017783 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "d6050c59-cb83-4be0-9710-f1739d8f457f" (UID: "d6050c59-cb83-4be0-9710-f1739d8f457f"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106363 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106487 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106549 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106630 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m4d5r\" (UniqueName: \"kubernetes.io/projected/e338f3b5-0567-4c6e-962f-46f0e80dc52a-kube-api-access-m4d5r\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106705 5122 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-sasl-users\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106742 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e338f3b5-0567-4c6e-962f-46f0e80dc52a-sasl-config\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106823 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106919 5122 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106943 5122 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106963 5122 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-sasl-users\") on node 
\"crc\" DevicePath \"\"" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.106983 5122 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.107002 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w6ftj\" (UniqueName: \"kubernetes.io/projected/d6050c59-cb83-4be0-9710-f1739d8f457f-kube-api-access-w6ftj\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.108051 5122 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d6050c59-cb83-4be0-9710-f1739d8f457f-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.108330 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e338f3b5-0567-4c6e-962f-46f0e80dc52a-sasl-config\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.110667 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.110681 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: 
\"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-sasl-users\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.110925 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.111395 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.111943 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e338f3b5-0567-4c6e-962f-46f0e80dc52a-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.130867 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4d5r\" (UniqueName: \"kubernetes.io/projected/e338f3b5-0567-4c6e-962f-46f0e80dc52a-kube-api-access-m4d5r\") pod \"default-interconnect-55bf8d5cb-dn7wc\" (UID: \"e338f3b5-0567-4c6e-962f-46f0e80dc52a\") " pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 
00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.238345 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" Feb 24 00:32:24 crc kubenswrapper[5122]: W0224 00:32:24.450517 5122 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode338f3b5_0567_4c6e_962f_46f0e80dc52a.slice/crio-164dc715dc1503b4e695c664ed95f7488469ababe0d127036741e906e89a216e WatchSource:0}: Error finding container 164dc715dc1503b4e695c664ed95f7488469ababe0d127036741e906e89a216e: Status 404 returned error can't find the container with id 164dc715dc1503b4e695c664ed95f7488469ababe0d127036741e906e89a216e Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.456552 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-dn7wc"] Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.554898 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" event={"ID":"e826ca33-c11c-4f9c-b71f-592d039c2ab1","Type":"ContainerDied","Data":"33ebf819378d14fd76fe0e2a99d9acbef9c2ae41ec7ddb093076afa349fd242b"} Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.554915 5122 generic.go:358] "Generic (PLEG): container finished" podID="e826ca33-c11c-4f9c-b71f-592d039c2ab1" containerID="33ebf819378d14fd76fe0e2a99d9acbef9c2ae41ec7ddb093076afa349fd242b" exitCode=0 Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.555742 5122 scope.go:117] "RemoveContainer" containerID="33ebf819378d14fd76fe0e2a99d9acbef9c2ae41ec7ddb093076afa349fd242b" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.556709 5122 generic.go:358] "Generic (PLEG): container finished" podID="d6050c59-cb83-4be0-9710-f1739d8f457f" containerID="b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3" exitCode=0 Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 
00:32:24.556750 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" event={"ID":"d6050c59-cb83-4be0-9710-f1739d8f457f","Type":"ContainerDied","Data":"b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3"} Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.556973 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" event={"ID":"d6050c59-cb83-4be0-9710-f1739d8f457f","Type":"ContainerDied","Data":"fb9b37514dc71503edcd2270cca7f6e9ebbe9ec66207341e725ac03b828b37f5"} Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.556992 5122 scope.go:117] "RemoveContainer" containerID="b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.557146 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-6pksx" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.561990 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" event={"ID":"e338f3b5-0567-4c6e-962f-46f0e80dc52a","Type":"ContainerStarted","Data":"164dc715dc1503b4e695c664ed95f7488469ababe0d127036741e906e89a216e"} Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.570236 5122 generic.go:358] "Generic (PLEG): container finished" podID="5699997a-df40-4c34-9e05-71f859b5e5a7" containerID="a14a978e05e1821848d00e4d0cc123f46b703b947ac4c935fd3845c64303cd69" exitCode=0 Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.570402 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" event={"ID":"5699997a-df40-4c34-9e05-71f859b5e5a7","Type":"ContainerDied","Data":"a14a978e05e1821848d00e4d0cc123f46b703b947ac4c935fd3845c64303cd69"} Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.570996 5122 
scope.go:117] "RemoveContainer" containerID="a14a978e05e1821848d00e4d0cc123f46b703b947ac4c935fd3845c64303cd69" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.584193 5122 scope.go:117] "RemoveContainer" containerID="b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3" Feb 24 00:32:24 crc kubenswrapper[5122]: E0224 00:32:24.587253 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3\": container with ID starting with b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3 not found: ID does not exist" containerID="b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.587299 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3"} err="failed to get container status \"b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3\": rpc error: code = NotFound desc = could not find container \"b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3\": container with ID starting with b8c895ef6c4a427ed35349235a0f7a4bf22d56493076646e716301a2e52acda3 not found: ID does not exist" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.613682 5122 generic.go:358] "Generic (PLEG): container finished" podID="3029a88d-f6e5-4969-937b-a2b09c89d9ba" containerID="6abc7b145cd25a75f4193fe00d1a8d3644f83756c18072284200f48a4807ecac" exitCode=0 Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.614108 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" event={"ID":"3029a88d-f6e5-4969-937b-a2b09c89d9ba","Type":"ContainerDied","Data":"6abc7b145cd25a75f4193fe00d1a8d3644f83756c18072284200f48a4807ecac"} Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 
00:32:24.615278 5122 scope.go:117] "RemoveContainer" containerID="6abc7b145cd25a75f4193fe00d1a8d3644f83756c18072284200f48a4807ecac" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.628313 5122 generic.go:358] "Generic (PLEG): container finished" podID="33ccb427-9fc9-4980-bebf-a48b7cdad5ba" containerID="80adffa1eac48ac7080622cb5ff405fb7d437e9caf566e378720a8fd4c50e8af" exitCode=0 Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.628617 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" event={"ID":"33ccb427-9fc9-4980-bebf-a48b7cdad5ba","Type":"ContainerDied","Data":"80adffa1eac48ac7080622cb5ff405fb7d437e9caf566e378720a8fd4c50e8af"} Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.632583 5122 scope.go:117] "RemoveContainer" containerID="80adffa1eac48ac7080622cb5ff405fb7d437e9caf566e378720a8fd4c50e8af" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.645812 5122 generic.go:358] "Generic (PLEG): container finished" podID="96dd63c5-de4b-4410-b007-da974ebb4e0e" containerID="d3ceb429e92c0554118dc73489ae7c31aec310afb90a3ea32359c04c191c4c9f" exitCode=0 Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.645971 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" event={"ID":"96dd63c5-de4b-4410-b007-da974ebb4e0e","Type":"ContainerDied","Data":"d3ceb429e92c0554118dc73489ae7c31aec310afb90a3ea32359c04c191c4c9f"} Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.646584 5122 scope.go:117] "RemoveContainer" containerID="d3ceb429e92c0554118dc73489ae7c31aec310afb90a3ea32359c04c191c4c9f" Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.729140 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-6pksx"] Feb 24 00:32:24 crc kubenswrapper[5122]: I0224 00:32:24.735339 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["service-telemetry/default-interconnect-55bf8d5cb-6pksx"] Feb 24 00:32:25 crc kubenswrapper[5122]: I0224 00:32:25.655959 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" event={"ID":"e826ca33-c11c-4f9c-b71f-592d039c2ab1","Type":"ContainerStarted","Data":"f12c3508377f44578706367e371e141c0e52d0ac1d3021a21730378741ccbabd"} Feb 24 00:32:25 crc kubenswrapper[5122]: I0224 00:32:25.668116 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" event={"ID":"e338f3b5-0567-4c6e-962f-46f0e80dc52a","Type":"ContainerStarted","Data":"2e1466d05d926b1109a4337c5209c02900f7315a9ff63cb11b843c75a8c96376"} Feb 24 00:32:25 crc kubenswrapper[5122]: I0224 00:32:25.672914 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" event={"ID":"5699997a-df40-4c34-9e05-71f859b5e5a7","Type":"ContainerStarted","Data":"05935acb7a344f410717263e63d8d681ae26af6ede87e3e20225c10fcabac2f6"} Feb 24 00:32:25 crc kubenswrapper[5122]: I0224 00:32:25.682274 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" event={"ID":"3029a88d-f6e5-4969-937b-a2b09c89d9ba","Type":"ContainerStarted","Data":"3998e0cc0e0ca0ccbfb1982deb6bba51120860f6e36872c2b77f0a8e617b9b68"} Feb 24 00:32:25 crc kubenswrapper[5122]: I0224 00:32:25.686368 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" event={"ID":"33ccb427-9fc9-4980-bebf-a48b7cdad5ba","Type":"ContainerStarted","Data":"f07359c2350bf27d1cb4b19be345d42365560f691cc45a17fdc53972a9dae66a"} Feb 24 00:32:25 crc kubenswrapper[5122]: I0224 00:32:25.692787 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" 
event={"ID":"96dd63c5-de4b-4410-b007-da974ebb4e0e","Type":"ContainerStarted","Data":"4c325fc9dbb9e60ca5e183c3bb901f0e4b0244fcbb56c00f1d7cdf8f57a99aca"} Feb 24 00:32:25 crc kubenswrapper[5122]: I0224 00:32:25.723708 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-dn7wc" podStartSLOduration=2.723685894 podStartE2EDuration="2.723685894s" podCreationTimestamp="2026-02-24 00:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:32:25.722145634 +0000 UTC m=+1412.811600167" watchObservedRunningTime="2026-02-24 00:32:25.723685894 +0000 UTC m=+1412.813140427" Feb 24 00:32:25 crc kubenswrapper[5122]: I0224 00:32:25.797232 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6050c59-cb83-4be0-9710-f1739d8f457f" path="/var/lib/kubelet/pods/d6050c59-cb83-4be0-9710-f1739d8f457f/volumes" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.702107 5122 generic.go:358] "Generic (PLEG): container finished" podID="e826ca33-c11c-4f9c-b71f-592d039c2ab1" containerID="f12c3508377f44578706367e371e141c0e52d0ac1d3021a21730378741ccbabd" exitCode=0 Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.702175 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" event={"ID":"e826ca33-c11c-4f9c-b71f-592d039c2ab1","Type":"ContainerDied","Data":"f12c3508377f44578706367e371e141c0e52d0ac1d3021a21730378741ccbabd"} Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.703138 5122 scope.go:117] "RemoveContainer" containerID="33ebf819378d14fd76fe0e2a99d9acbef9c2ae41ec7ddb093076afa349fd242b" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.703733 5122 scope.go:117] "RemoveContainer" containerID="f12c3508377f44578706367e371e141c0e52d0ac1d3021a21730378741ccbabd" Feb 24 00:32:26 crc kubenswrapper[5122]: E0224 
00:32:26.704253 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8_service-telemetry(e826ca33-c11c-4f9c-b71f-592d039c2ab1)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" podUID="e826ca33-c11c-4f9c-b71f-592d039c2ab1" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.706796 5122 generic.go:358] "Generic (PLEG): container finished" podID="5699997a-df40-4c34-9e05-71f859b5e5a7" containerID="05935acb7a344f410717263e63d8d681ae26af6ede87e3e20225c10fcabac2f6" exitCode=0 Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.706886 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" event={"ID":"5699997a-df40-4c34-9e05-71f859b5e5a7","Type":"ContainerDied","Data":"05935acb7a344f410717263e63d8d681ae26af6ede87e3e20225c10fcabac2f6"} Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.707321 5122 scope.go:117] "RemoveContainer" containerID="05935acb7a344f410717263e63d8d681ae26af6ede87e3e20225c10fcabac2f6" Feb 24 00:32:26 crc kubenswrapper[5122]: E0224 00:32:26.707536 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5_service-telemetry(5699997a-df40-4c34-9e05-71f859b5e5a7)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" podUID="5699997a-df40-4c34-9e05-71f859b5e5a7" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.710656 5122 generic.go:358] "Generic (PLEG): container finished" podID="3029a88d-f6e5-4969-937b-a2b09c89d9ba" containerID="3998e0cc0e0ca0ccbfb1982deb6bba51120860f6e36872c2b77f0a8e617b9b68" exitCode=0 Feb 24 00:32:26 crc 
kubenswrapper[5122]: I0224 00:32:26.710738 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" event={"ID":"3029a88d-f6e5-4969-937b-a2b09c89d9ba","Type":"ContainerDied","Data":"3998e0cc0e0ca0ccbfb1982deb6bba51120860f6e36872c2b77f0a8e617b9b68"} Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.711256 5122 scope.go:117] "RemoveContainer" containerID="3998e0cc0e0ca0ccbfb1982deb6bba51120860f6e36872c2b77f0a8e617b9b68" Feb 24 00:32:26 crc kubenswrapper[5122]: E0224 00:32:26.711526 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9_service-telemetry(3029a88d-f6e5-4969-937b-a2b09c89d9ba)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" podUID="3029a88d-f6e5-4969-937b-a2b09c89d9ba" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.718753 5122 generic.go:358] "Generic (PLEG): container finished" podID="33ccb427-9fc9-4980-bebf-a48b7cdad5ba" containerID="f07359c2350bf27d1cb4b19be345d42365560f691cc45a17fdc53972a9dae66a" exitCode=0 Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.718928 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" event={"ID":"33ccb427-9fc9-4980-bebf-a48b7cdad5ba","Type":"ContainerDied","Data":"f07359c2350bf27d1cb4b19be345d42365560f691cc45a17fdc53972a9dae66a"} Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.719479 5122 scope.go:117] "RemoveContainer" containerID="f07359c2350bf27d1cb4b19be345d42365560f691cc45a17fdc53972a9dae66a" Feb 24 00:32:26 crc kubenswrapper[5122]: E0224 00:32:26.719930 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=bridge pod=default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf_service-telemetry(33ccb427-9fc9-4980-bebf-a48b7cdad5ba)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" podUID="33ccb427-9fc9-4980-bebf-a48b7cdad5ba" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.729617 5122 generic.go:358] "Generic (PLEG): container finished" podID="96dd63c5-de4b-4410-b007-da974ebb4e0e" containerID="4c325fc9dbb9e60ca5e183c3bb901f0e4b0244fcbb56c00f1d7cdf8f57a99aca" exitCode=0 Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.729869 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" event={"ID":"96dd63c5-de4b-4410-b007-da974ebb4e0e","Type":"ContainerDied","Data":"4c325fc9dbb9e60ca5e183c3bb901f0e4b0244fcbb56c00f1d7cdf8f57a99aca"} Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.730175 5122 scope.go:117] "RemoveContainer" containerID="4c325fc9dbb9e60ca5e183c3bb901f0e4b0244fcbb56c00f1d7cdf8f57a99aca" Feb 24 00:32:26 crc kubenswrapper[5122]: E0224 00:32:26.730504 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-89d47b45c-k5khc_service-telemetry(96dd63c5-de4b-4410-b007-da974ebb4e0e)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" podUID="96dd63c5-de4b-4410-b007-da974ebb4e0e" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.804205 5122 scope.go:117] "RemoveContainer" containerID="a14a978e05e1821848d00e4d0cc123f46b703b947ac4c935fd3845c64303cd69" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.842860 5122 scope.go:117] "RemoveContainer" containerID="6abc7b145cd25a75f4193fe00d1a8d3644f83756c18072284200f48a4807ecac" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.881243 5122 scope.go:117] "RemoveContainer" 
containerID="80adffa1eac48ac7080622cb5ff405fb7d437e9caf566e378720a8fd4c50e8af" Feb 24 00:32:26 crc kubenswrapper[5122]: I0224 00:32:26.924143 5122 scope.go:117] "RemoveContainer" containerID="d3ceb429e92c0554118dc73489ae7c31aec310afb90a3ea32359c04c191c4c9f" Feb 24 00:32:37 crc kubenswrapper[5122]: I0224 00:32:37.776478 5122 scope.go:117] "RemoveContainer" containerID="4c325fc9dbb9e60ca5e183c3bb901f0e4b0244fcbb56c00f1d7cdf8f57a99aca" Feb 24 00:32:37 crc kubenswrapper[5122]: I0224 00:32:37.777692 5122 scope.go:117] "RemoveContainer" containerID="f07359c2350bf27d1cb4b19be345d42365560f691cc45a17fdc53972a9dae66a" Feb 24 00:32:38 crc kubenswrapper[5122]: I0224 00:32:38.774929 5122 scope.go:117] "RemoveContainer" containerID="f12c3508377f44578706367e371e141c0e52d0ac1d3021a21730378741ccbabd" Feb 24 00:32:38 crc kubenswrapper[5122]: I0224 00:32:38.775476 5122 scope.go:117] "RemoveContainer" containerID="05935acb7a344f410717263e63d8d681ae26af6ede87e3e20225c10fcabac2f6" Feb 24 00:32:38 crc kubenswrapper[5122]: I0224 00:32:38.848598 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf" event={"ID":"33ccb427-9fc9-4980-bebf-a48b7cdad5ba","Type":"ContainerStarted","Data":"606ed98259134f74b712e4ad784dadbbf3a23cd8efa53fa036ff644d784716bb"} Feb 24 00:32:38 crc kubenswrapper[5122]: I0224 00:32:38.856717 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-89d47b45c-k5khc" event={"ID":"96dd63c5-de4b-4410-b007-da974ebb4e0e","Type":"ContainerStarted","Data":"4165e6e9ba8070bcd676f91fca7be8f115fae3df287ea423ec09154f1b277f4c"} Feb 24 00:32:39 crc kubenswrapper[5122]: I0224 00:32:39.865757 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8" 
event={"ID":"e826ca33-c11c-4f9c-b71f-592d039c2ab1","Type":"ContainerStarted","Data":"ab33d08bfac91ddcf0366cb49315be1b7a5d920fcaea3feabed6680009bb9ecc"} Feb 24 00:32:39 crc kubenswrapper[5122]: I0224 00:32:39.868254 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5" event={"ID":"5699997a-df40-4c34-9e05-71f859b5e5a7","Type":"ContainerStarted","Data":"5ee81cd3ea2d901e0ee69c564d8b8a5856c1849e5a1aa5599148ace48126a7a3"} Feb 24 00:32:41 crc kubenswrapper[5122]: I0224 00:32:41.775427 5122 scope.go:117] "RemoveContainer" containerID="3998e0cc0e0ca0ccbfb1982deb6bba51120860f6e36872c2b77f0a8e617b9b68" Feb 24 00:32:42 crc kubenswrapper[5122]: I0224 00:32:42.910250 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9" event={"ID":"3029a88d-f6e5-4969-937b-a2b09c89d9ba","Type":"ContainerStarted","Data":"91c0fa7f68061311da3544c79a218737a3690f51793af2d5699372f9875bac53"} Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.643471 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.651813 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.651887 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.658600 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.658934 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.736976 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-qdr-test-config\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.737240 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmzct\" (UniqueName: \"kubernetes.io/projected/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-kube-api-access-rmzct\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.737340 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.838685 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " 
pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.839330 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-qdr-test-config\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.839399 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmzct\" (UniqueName: \"kubernetes.io/projected/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-kube-api-access-rmzct\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.840109 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-qdr-test-config\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.845313 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.854270 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmzct\" (UniqueName: \"kubernetes.io/projected/47eff4cb-d21f-4878-95c0-90de5a2dfc5d-kube-api-access-rmzct\") pod \"qdr-test\" (UID: \"47eff4cb-d21f-4878-95c0-90de5a2dfc5d\") " pod="service-telemetry/qdr-test" Feb 24 00:32:52 crc kubenswrapper[5122]: I0224 00:32:52.974169 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 24 00:32:53 crc kubenswrapper[5122]: I0224 00:32:53.168519 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 24 00:32:53 crc kubenswrapper[5122]: I0224 00:32:53.989549 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"47eff4cb-d21f-4878-95c0-90de5a2dfc5d","Type":"ContainerStarted","Data":"d3e09456d2117aef374992ffccce49151e9aa977a109327eb46f1cfd3459ddd7"} Feb 24 00:32:59 crc kubenswrapper[5122]: I0224 00:32:59.199587 5122 scope.go:117] "RemoveContainer" containerID="2737a005f21ffd2fb389d9e6b12e766a8e6e8aafe89c43b6b9b7cdae15b39a3a" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.054573 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"47eff4cb-d21f-4878-95c0-90de5a2dfc5d","Type":"ContainerStarted","Data":"7547952cc1e12471cd60427015bb5bb5f1706cf449b095275da5398542585e5e"} Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.078763 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.558756158 podStartE2EDuration="8.078735542s" podCreationTimestamp="2026-02-24 00:32:52 +0000 UTC" firstStartedPulling="2026-02-24 00:32:53.176613653 +0000 UTC m=+1440.266068166" lastFinishedPulling="2026-02-24 00:32:59.696593037 +0000 UTC m=+1446.786047550" observedRunningTime="2026-02-24 00:33:00.069432831 +0000 UTC m=+1447.158887384" watchObservedRunningTime="2026-02-24 00:33:00.078735542 +0000 UTC m=+1447.168190075" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.318975 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-5prlr"] Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.326437 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-5prlr"] Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 
00:33:00.326563 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.330499 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.330506 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.330692 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.330715 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.331981 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.332108 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.445232 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.445340 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqml\" 
(UniqueName: \"kubernetes.io/projected/a39daeab-83a5-4e1f-bb35-0b1da076413e-kube-api-access-lpqml\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.445369 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-healthcheck-log\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.445400 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.445428 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-sensubility-config\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.445449 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 
00:33:00.445486 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-config\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.546518 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-config\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.546597 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.546654 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqml\" (UniqueName: \"kubernetes.io/projected/a39daeab-83a5-4e1f-bb35-0b1da076413e-kube-api-access-lpqml\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.546674 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-healthcheck-log\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc 
kubenswrapper[5122]: I0224 00:33:00.546698 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.546720 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-sensubility-config\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.546738 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.547741 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.548015 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-sensubility-config\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " 
pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.548220 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-healthcheck-log\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.548398 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.548432 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.548797 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-config\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.571927 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqml\" (UniqueName: \"kubernetes.io/projected/a39daeab-83a5-4e1f-bb35-0b1da076413e-kube-api-access-lpqml\") pod \"stf-smoketest-smoke1-5prlr\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " 
pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.638841 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.644171 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.655859 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.677419 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.749890 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcv5w\" (UniqueName: \"kubernetes.io/projected/f714baea-3449-4573-a97f-bc60f278c3a4-kube-api-access-pcv5w\") pod \"curl\" (UID: \"f714baea-3449-4573-a97f-bc60f278c3a4\") " pod="service-telemetry/curl" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.852995 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pcv5w\" (UniqueName: \"kubernetes.io/projected/f714baea-3449-4573-a97f-bc60f278c3a4-kube-api-access-pcv5w\") pod \"curl\" (UID: \"f714baea-3449-4573-a97f-bc60f278c3a4\") " pod="service-telemetry/curl" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.871769 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcv5w\" (UniqueName: \"kubernetes.io/projected/f714baea-3449-4573-a97f-bc60f278c3a4-kube-api-access-pcv5w\") pod \"curl\" (UID: \"f714baea-3449-4573-a97f-bc60f278c3a4\") " pod="service-telemetry/curl" Feb 24 00:33:00 crc kubenswrapper[5122]: I0224 00:33:00.970482 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 24 00:33:01 crc kubenswrapper[5122]: I0224 00:33:01.086990 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-5prlr"] Feb 24 00:33:01 crc kubenswrapper[5122]: I0224 00:33:01.159712 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 24 00:33:02 crc kubenswrapper[5122]: I0224 00:33:02.073208 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f714baea-3449-4573-a97f-bc60f278c3a4","Type":"ContainerStarted","Data":"485591c92a91e3e2dd6d43aeedaae0a821fd451fb95f4c7ec7548dca17bd5948"} Feb 24 00:33:02 crc kubenswrapper[5122]: I0224 00:33:02.074947 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5prlr" event={"ID":"a39daeab-83a5-4e1f-bb35-0b1da076413e","Type":"ContainerStarted","Data":"e9ee1fefee404dd3dc9759c0349bd8e4b49899551416efb10550c077e4dac0d2"} Feb 24 00:33:03 crc kubenswrapper[5122]: I0224 00:33:03.092777 5122 generic.go:358] "Generic (PLEG): container finished" podID="f714baea-3449-4573-a97f-bc60f278c3a4" containerID="b3123a8f57a79a8bbee83707d78b1eaf273806aeae5d32dc199346355d8d7d76" exitCode=0 Feb 24 00:33:03 crc kubenswrapper[5122]: I0224 00:33:03.093251 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f714baea-3449-4573-a97f-bc60f278c3a4","Type":"ContainerDied","Data":"b3123a8f57a79a8bbee83707d78b1eaf273806aeae5d32dc199346355d8d7d76"} Feb 24 00:33:05 crc kubenswrapper[5122]: I0224 00:33:05.889153 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 24 00:33:06 crc kubenswrapper[5122]: I0224 00:33:06.030323 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcv5w\" (UniqueName: \"kubernetes.io/projected/f714baea-3449-4573-a97f-bc60f278c3a4-kube-api-access-pcv5w\") pod \"f714baea-3449-4573-a97f-bc60f278c3a4\" (UID: \"f714baea-3449-4573-a97f-bc60f278c3a4\") " Feb 24 00:33:06 crc kubenswrapper[5122]: I0224 00:33:06.036549 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_f714baea-3449-4573-a97f-bc60f278c3a4/curl/0.log" Feb 24 00:33:06 crc kubenswrapper[5122]: I0224 00:33:06.050560 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f714baea-3449-4573-a97f-bc60f278c3a4-kube-api-access-pcv5w" (OuterVolumeSpecName: "kube-api-access-pcv5w") pod "f714baea-3449-4573-a97f-bc60f278c3a4" (UID: "f714baea-3449-4573-a97f-bc60f278c3a4"). InnerVolumeSpecName "kube-api-access-pcv5w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:33:06 crc kubenswrapper[5122]: I0224 00:33:06.117806 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f714baea-3449-4573-a97f-bc60f278c3a4","Type":"ContainerDied","Data":"485591c92a91e3e2dd6d43aeedaae0a821fd451fb95f4c7ec7548dca17bd5948"} Feb 24 00:33:06 crc kubenswrapper[5122]: I0224 00:33:06.117854 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="485591c92a91e3e2dd6d43aeedaae0a821fd451fb95f4c7ec7548dca17bd5948" Feb 24 00:33:06 crc kubenswrapper[5122]: I0224 00:33:06.117899 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 24 00:33:06 crc kubenswrapper[5122]: I0224 00:33:06.132785 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pcv5w\" (UniqueName: \"kubernetes.io/projected/f714baea-3449-4573-a97f-bc60f278c3a4-kube-api-access-pcv5w\") on node \"crc\" DevicePath \"\"" Feb 24 00:33:06 crc kubenswrapper[5122]: I0224 00:33:06.268037 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-fmgnk_3a27adb0-983c-4b44-bdcb-ff3240f167aa/prometheus-webhook-snmp/0.log" Feb 24 00:33:11 crc kubenswrapper[5122]: I0224 00:33:11.164000 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5prlr" event={"ID":"a39daeab-83a5-4e1f-bb35-0b1da076413e","Type":"ContainerStarted","Data":"7f4bce0d40cf34689cd9369ebe978e1d114383548573d89b120f67f17e468c64"} Feb 24 00:33:20 crc kubenswrapper[5122]: I0224 00:33:20.227385 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5prlr" event={"ID":"a39daeab-83a5-4e1f-bb35-0b1da076413e","Type":"ContainerStarted","Data":"f62404e107223fd9400a72f4cdebcac153ea2faec91b5494705683aad37062b5"} Feb 24 00:33:20 crc kubenswrapper[5122]: I0224 00:33:20.248654 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-5prlr" podStartSLOduration=1.520805324 podStartE2EDuration="20.248638879s" podCreationTimestamp="2026-02-24 00:33:00 +0000 UTC" firstStartedPulling="2026-02-24 00:33:01.102293618 +0000 UTC m=+1448.191748131" lastFinishedPulling="2026-02-24 00:33:19.830127153 +0000 UTC m=+1466.919581686" observedRunningTime="2026-02-24 00:33:20.245862227 +0000 UTC m=+1467.335316730" watchObservedRunningTime="2026-02-24 00:33:20.248638879 +0000 UTC m=+1467.338093392" Feb 24 00:33:27 crc kubenswrapper[5122]: I0224 00:33:27.115835 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:33:27 crc kubenswrapper[5122]: I0224 00:33:27.116434 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:33:36 crc kubenswrapper[5122]: I0224 00:33:36.396266 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-fmgnk_3a27adb0-983c-4b44-bdcb-ff3240f167aa/prometheus-webhook-snmp/0.log" Feb 24 00:33:45 crc kubenswrapper[5122]: I0224 00:33:45.431474 5122 generic.go:358] "Generic (PLEG): container finished" podID="a39daeab-83a5-4e1f-bb35-0b1da076413e" containerID="7f4bce0d40cf34689cd9369ebe978e1d114383548573d89b120f67f17e468c64" exitCode=0 Feb 24 00:33:45 crc kubenswrapper[5122]: I0224 00:33:45.431594 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5prlr" event={"ID":"a39daeab-83a5-4e1f-bb35-0b1da076413e","Type":"ContainerDied","Data":"7f4bce0d40cf34689cd9369ebe978e1d114383548573d89b120f67f17e468c64"} Feb 24 00:33:45 crc kubenswrapper[5122]: I0224 00:33:45.432757 5122 scope.go:117] "RemoveContainer" containerID="7f4bce0d40cf34689cd9369ebe978e1d114383548573d89b120f67f17e468c64" Feb 24 00:33:51 crc kubenswrapper[5122]: I0224 00:33:51.474117 5122 generic.go:358] "Generic (PLEG): container finished" podID="a39daeab-83a5-4e1f-bb35-0b1da076413e" containerID="f62404e107223fd9400a72f4cdebcac153ea2faec91b5494705683aad37062b5" exitCode=0 Feb 24 00:33:51 crc kubenswrapper[5122]: I0224 00:33:51.474222 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/stf-smoketest-smoke1-5prlr" event={"ID":"a39daeab-83a5-4e1f-bb35-0b1da076413e","Type":"ContainerDied","Data":"f62404e107223fd9400a72f4cdebcac153ea2faec91b5494705683aad37062b5"} Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.798937 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.932119 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-healthcheck-log\") pod \"a39daeab-83a5-4e1f-bb35-0b1da076413e\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.932170 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-sensubility-config\") pod \"a39daeab-83a5-4e1f-bb35-0b1da076413e\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.932305 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-publisher\") pod \"a39daeab-83a5-4e1f-bb35-0b1da076413e\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.932344 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-entrypoint-script\") pod \"a39daeab-83a5-4e1f-bb35-0b1da076413e\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.932367 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-entrypoint-script\") pod \"a39daeab-83a5-4e1f-bb35-0b1da076413e\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.932395 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-config\") pod \"a39daeab-83a5-4e1f-bb35-0b1da076413e\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.933199 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpqml\" (UniqueName: \"kubernetes.io/projected/a39daeab-83a5-4e1f-bb35-0b1da076413e-kube-api-access-lpqml\") pod \"a39daeab-83a5-4e1f-bb35-0b1da076413e\" (UID: \"a39daeab-83a5-4e1f-bb35-0b1da076413e\") " Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.940876 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39daeab-83a5-4e1f-bb35-0b1da076413e-kube-api-access-lpqml" (OuterVolumeSpecName: "kube-api-access-lpqml") pod "a39daeab-83a5-4e1f-bb35-0b1da076413e" (UID: "a39daeab-83a5-4e1f-bb35-0b1da076413e"). InnerVolumeSpecName "kube-api-access-lpqml". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.948921 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "a39daeab-83a5-4e1f-bb35-0b1da076413e" (UID: "a39daeab-83a5-4e1f-bb35-0b1da076413e"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.951061 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "a39daeab-83a5-4e1f-bb35-0b1da076413e" (UID: "a39daeab-83a5-4e1f-bb35-0b1da076413e"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.951612 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "a39daeab-83a5-4e1f-bb35-0b1da076413e" (UID: "a39daeab-83a5-4e1f-bb35-0b1da076413e"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.951657 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "a39daeab-83a5-4e1f-bb35-0b1da076413e" (UID: "a39daeab-83a5-4e1f-bb35-0b1da076413e"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.951797 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "a39daeab-83a5-4e1f-bb35-0b1da076413e" (UID: "a39daeab-83a5-4e1f-bb35-0b1da076413e"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:33:52 crc kubenswrapper[5122]: I0224 00:33:52.952574 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "a39daeab-83a5-4e1f-bb35-0b1da076413e" (UID: "a39daeab-83a5-4e1f-bb35-0b1da076413e"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.034964 5122 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.035291 5122 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.035358 5122 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.035420 5122 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.035481 5122 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.035538 5122 
reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/a39daeab-83a5-4e1f-bb35-0b1da076413e-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.035593 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lpqml\" (UniqueName: \"kubernetes.io/projected/a39daeab-83a5-4e1f-bb35-0b1da076413e-kube-api-access-lpqml\") on node \"crc\" DevicePath \"\"" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.493799 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-5prlr" Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.493811 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5prlr" event={"ID":"a39daeab-83a5-4e1f-bb35-0b1da076413e","Type":"ContainerDied","Data":"e9ee1fefee404dd3dc9759c0349bd8e4b49899551416efb10550c077e4dac0d2"} Feb 24 00:33:53 crc kubenswrapper[5122]: I0224 00:33:53.493851 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9ee1fefee404dd3dc9759c0349bd8e4b49899551416efb10550c077e4dac0d2" Feb 24 00:33:54 crc kubenswrapper[5122]: I0224 00:33:54.404238 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jz28d_b5f97112-ba2a-46c0-a285-a845d2f96be9/kube-multus/0.log" Feb 24 00:33:54 crc kubenswrapper[5122]: I0224 00:33:54.410984 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jz28d_b5f97112-ba2a-46c0-a285-a845d2f96be9/kube-multus/0.log" Feb 24 00:33:54 crc kubenswrapper[5122]: I0224 00:33:54.412108 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:33:54 crc kubenswrapper[5122]: I0224 00:33:54.417656 5122 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 24 00:33:54 crc kubenswrapper[5122]: I0224 00:33:54.717622 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-5prlr_a39daeab-83a5-4e1f-bb35-0b1da076413e/smoketest-collectd/0.log" Feb 24 00:33:54 crc kubenswrapper[5122]: I0224 00:33:54.964157 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-5prlr_a39daeab-83a5-4e1f-bb35-0b1da076413e/smoketest-ceilometer/0.log" Feb 24 00:33:55 crc kubenswrapper[5122]: I0224 00:33:55.222891 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-dn7wc_e338f3b5-0567-4c6e-962f-46f0e80dc52a/default-interconnect/0.log" Feb 24 00:33:55 crc kubenswrapper[5122]: I0224 00:33:55.454857 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9_3029a88d-f6e5-4969-937b-a2b09c89d9ba/bridge/2.log" Feb 24 00:33:55 crc kubenswrapper[5122]: I0224 00:33:55.664807 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-zhvh9_3029a88d-f6e5-4969-937b-a2b09c89d9ba/sg-core/0.log" Feb 24 00:33:55 crc kubenswrapper[5122]: I0224 00:33:55.907665 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-89d47b45c-k5khc_96dd63c5-de4b-4410-b007-da974ebb4e0e/bridge/2.log" Feb 24 00:33:56 crc kubenswrapper[5122]: I0224 00:33:56.126859 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-89d47b45c-k5khc_96dd63c5-de4b-4410-b007-da974ebb4e0e/sg-core/0.log" Feb 24 00:33:56 crc kubenswrapper[5122]: I0224 00:33:56.395100 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8_e826ca33-c11c-4f9c-b71f-592d039c2ab1/bridge/2.log" Feb 24 00:33:56 crc kubenswrapper[5122]: I0224 00:33:56.655475 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-d9bf8_e826ca33-c11c-4f9c-b71f-592d039c2ab1/sg-core/0.log" Feb 24 00:33:56 crc kubenswrapper[5122]: I0224 00:33:56.959670 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf_33ccb427-9fc9-4980-bebf-a48b7cdad5ba/bridge/2.log" Feb 24 00:33:57 crc kubenswrapper[5122]: I0224 00:33:57.116314 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:33:57 crc kubenswrapper[5122]: I0224 00:33:57.116383 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:33:57 crc kubenswrapper[5122]: I0224 00:33:57.216617 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-66c74bf4b8-l5ctf_33ccb427-9fc9-4980-bebf-a48b7cdad5ba/sg-core/0.log" Feb 24 00:33:57 crc kubenswrapper[5122]: I0224 00:33:57.535267 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5_5699997a-df40-4c34-9e05-71f859b5e5a7/bridge/2.log" Feb 24 00:33:57 crc kubenswrapper[5122]: I0224 00:33:57.778575 5122 log.go:25] "Finished parsing 
log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-kh7s5_5699997a-df40-4c34-9e05-71f859b5e5a7/sg-core/0.log" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.133036 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531554-q8nmd"] Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.134224 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a39daeab-83a5-4e1f-bb35-0b1da076413e" containerName="smoketest-collectd" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.134244 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39daeab-83a5-4e1f-bb35-0b1da076413e" containerName="smoketest-collectd" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.134278 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a39daeab-83a5-4e1f-bb35-0b1da076413e" containerName="smoketest-ceilometer" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.134286 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39daeab-83a5-4e1f-bb35-0b1da076413e" containerName="smoketest-ceilometer" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.134321 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f714baea-3449-4573-a97f-bc60f278c3a4" containerName="curl" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.134330 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714baea-3449-4573-a97f-bc60f278c3a4" containerName="curl" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.134476 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="a39daeab-83a5-4e1f-bb35-0b1da076413e" containerName="smoketest-ceilometer" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.134493 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="a39daeab-83a5-4e1f-bb35-0b1da076413e" containerName="smoketest-collectd" Feb 24 00:34:00 crc 
kubenswrapper[5122]: I0224 00:34:00.134511 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="f714baea-3449-4573-a97f-bc60f278c3a4" containerName="curl" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.137908 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531554-q8nmd" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.173702 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.174677 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.174736 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.176396 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531554-q8nmd"] Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.238390 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w88lf\" (UniqueName: \"kubernetes.io/projected/29406c0f-e811-4099-aadb-4bdb7c26771a-kube-api-access-w88lf\") pod \"auto-csr-approver-29531554-q8nmd\" (UID: \"29406c0f-e811-4099-aadb-4bdb7c26771a\") " pod="openshift-infra/auto-csr-approver-29531554-q8nmd" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.339501 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w88lf\" (UniqueName: \"kubernetes.io/projected/29406c0f-e811-4099-aadb-4bdb7c26771a-kube-api-access-w88lf\") pod \"auto-csr-approver-29531554-q8nmd\" (UID: \"29406c0f-e811-4099-aadb-4bdb7c26771a\") " pod="openshift-infra/auto-csr-approver-29531554-q8nmd" Feb 24 00:34:00 crc 
kubenswrapper[5122]: I0224 00:34:00.359369 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w88lf\" (UniqueName: \"kubernetes.io/projected/29406c0f-e811-4099-aadb-4bdb7c26771a-kube-api-access-w88lf\") pod \"auto-csr-approver-29531554-q8nmd\" (UID: \"29406c0f-e811-4099-aadb-4bdb7c26771a\") " pod="openshift-infra/auto-csr-approver-29531554-q8nmd" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.392450 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-544bbc9ddd-bx4hp_b8c9fe6c-982a-4162-aa02-7f783626920f/operator/0.log" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.497193 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531554-q8nmd" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.636408 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_cea3bfb1-bbc1-4d71-b7df-7b8070e46908/prometheus/0.log" Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.697189 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531554-q8nmd"] Feb 24 00:34:00 crc kubenswrapper[5122]: I0224 00:34:00.860401 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_40e78782-0cd0-484d-846c-a2b76a952ae4/elasticsearch/0.log" Feb 24 00:34:01 crc kubenswrapper[5122]: I0224 00:34:01.111854 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-fmgnk_3a27adb0-983c-4b44-bdcb-ff3240f167aa/prometheus-webhook-snmp/0.log" Feb 24 00:34:01 crc kubenswrapper[5122]: I0224 00:34:01.384233 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_12d38c4a-59e9-4209-89e2-0f2c5d2730ce/alertmanager/0.log" Feb 24 00:34:01 crc kubenswrapper[5122]: I0224 00:34:01.556799 5122 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531554-q8nmd" event={"ID":"29406c0f-e811-4099-aadb-4bdb7c26771a","Type":"ContainerStarted","Data":"cddc14d51a1d4716f28d7e2e43d50020cd0e64755b4f68ac431943dd6ddae4ab"} Feb 24 00:34:02 crc kubenswrapper[5122]: I0224 00:34:02.565768 5122 generic.go:358] "Generic (PLEG): container finished" podID="29406c0f-e811-4099-aadb-4bdb7c26771a" containerID="b7462b51e6a0b8274ed7cb0a2da44bde23ab47d935c941149d05a314fe52af8e" exitCode=0 Feb 24 00:34:02 crc kubenswrapper[5122]: I0224 00:34:02.565909 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531554-q8nmd" event={"ID":"29406c0f-e811-4099-aadb-4bdb7c26771a","Type":"ContainerDied","Data":"b7462b51e6a0b8274ed7cb0a2da44bde23ab47d935c941149d05a314fe52af8e"} Feb 24 00:34:03 crc kubenswrapper[5122]: I0224 00:34:03.838682 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531554-q8nmd" Feb 24 00:34:03 crc kubenswrapper[5122]: I0224 00:34:03.892807 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w88lf\" (UniqueName: \"kubernetes.io/projected/29406c0f-e811-4099-aadb-4bdb7c26771a-kube-api-access-w88lf\") pod \"29406c0f-e811-4099-aadb-4bdb7c26771a\" (UID: \"29406c0f-e811-4099-aadb-4bdb7c26771a\") " Feb 24 00:34:03 crc kubenswrapper[5122]: I0224 00:34:03.904065 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29406c0f-e811-4099-aadb-4bdb7c26771a-kube-api-access-w88lf" (OuterVolumeSpecName: "kube-api-access-w88lf") pod "29406c0f-e811-4099-aadb-4bdb7c26771a" (UID: "29406c0f-e811-4099-aadb-4bdb7c26771a"). InnerVolumeSpecName "kube-api-access-w88lf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:34:03 crc kubenswrapper[5122]: I0224 00:34:03.994749 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w88lf\" (UniqueName: \"kubernetes.io/projected/29406c0f-e811-4099-aadb-4bdb7c26771a-kube-api-access-w88lf\") on node \"crc\" DevicePath \"\"" Feb 24 00:34:04 crc kubenswrapper[5122]: I0224 00:34:04.587145 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531554-q8nmd" Feb 24 00:34:04 crc kubenswrapper[5122]: I0224 00:34:04.587161 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531554-q8nmd" event={"ID":"29406c0f-e811-4099-aadb-4bdb7c26771a","Type":"ContainerDied","Data":"cddc14d51a1d4716f28d7e2e43d50020cd0e64755b4f68ac431943dd6ddae4ab"} Feb 24 00:34:04 crc kubenswrapper[5122]: I0224 00:34:04.587213 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddc14d51a1d4716f28d7e2e43d50020cd0e64755b4f68ac431943dd6ddae4ab" Feb 24 00:34:04 crc kubenswrapper[5122]: I0224 00:34:04.919126 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531548-45r4c"] Feb 24 00:34:04 crc kubenswrapper[5122]: I0224 00:34:04.931255 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531548-45r4c"] Feb 24 00:34:05 crc kubenswrapper[5122]: I0224 00:34:05.793623 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e6643b-db72-41da-8438-593821a4ab0b" path="/var/lib/kubelet/pods/57e6643b-db72-41da-8438-593821a4ab0b/volumes" Feb 24 00:34:15 crc kubenswrapper[5122]: I0224 00:34:15.926309 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-8b8bc878d-z4qks_78992448-2ff2-4e0d-a5e9-4c8a2dd6ab0e/operator/0.log" Feb 24 00:34:18 crc kubenswrapper[5122]: I0224 00:34:18.863697 5122 log.go:25] 
"Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-544bbc9ddd-bx4hp_b8c9fe6c-982a-4162-aa02-7f783626920f/operator/0.log" Feb 24 00:34:19 crc kubenswrapper[5122]: I0224 00:34:19.161226 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_47eff4cb-d21f-4878-95c0-90de5a2dfc5d/qdr/0.log" Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.116416 5122 patch_prober.go:28] interesting pod/machine-config-daemon-mr2pp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.117036 5122 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.117138 5122 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.117895 5122 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"} pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.117972 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" 
podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerName="machine-config-daemon" containerID="cri-o://13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" gracePeriod=600 Feb 24 00:34:27 crc kubenswrapper[5122]: E0224 00:34:27.249983 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.787299 5122 generic.go:358] "Generic (PLEG): container finished" podID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" exitCode=0 Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.787417 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" event={"ID":"a07a0dd1-ea17-44c0-a92f-d51bc168c592","Type":"ContainerDied","Data":"13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"} Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.787504 5122 scope.go:117] "RemoveContainer" containerID="11832d5408cd581df642868cc9e689ce6738c918addb34398621612d1d170a86" Feb 24 00:34:27 crc kubenswrapper[5122]: I0224 00:34:27.787986 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:34:27 crc kubenswrapper[5122]: E0224 00:34:27.788367 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:34:42 crc kubenswrapper[5122]: I0224 00:34:42.775683 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:34:42 crc kubenswrapper[5122]: E0224 00:34:42.776687 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.125053 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xg5nw/must-gather-86js8"] Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.126233 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29406c0f-e811-4099-aadb-4bdb7c26771a" containerName="oc" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.126258 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="29406c0f-e811-4099-aadb-4bdb7c26771a" containerName="oc" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.126416 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="29406c0f-e811-4099-aadb-4bdb7c26771a" containerName="oc" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.133378 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xg5nw/must-gather-86js8"] Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.133516 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xg5nw/must-gather-86js8" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.135399 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-xg5nw\"/\"openshift-service-ca.crt\"" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.136208 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-xg5nw\"/\"kube-root-ca.crt\"" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.141152 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-xg5nw\"/\"default-dockercfg-6442x\"" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.261363 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-must-gather-output\") pod \"must-gather-86js8\" (UID: \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\") " pod="openshift-must-gather-xg5nw/must-gather-86js8" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.261426 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p762t\" (UniqueName: \"kubernetes.io/projected/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-kube-api-access-p762t\") pod \"must-gather-86js8\" (UID: \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\") " pod="openshift-must-gather-xg5nw/must-gather-86js8" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.363195 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-must-gather-output\") pod \"must-gather-86js8\" (UID: \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\") " pod="openshift-must-gather-xg5nw/must-gather-86js8" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.363270 5122 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p762t\" (UniqueName: \"kubernetes.io/projected/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-kube-api-access-p762t\") pod \"must-gather-86js8\" (UID: \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\") " pod="openshift-must-gather-xg5nw/must-gather-86js8" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.363718 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-must-gather-output\") pod \"must-gather-86js8\" (UID: \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\") " pod="openshift-must-gather-xg5nw/must-gather-86js8" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.381384 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p762t\" (UniqueName: \"kubernetes.io/projected/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-kube-api-access-p762t\") pod \"must-gather-86js8\" (UID: \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\") " pod="openshift-must-gather-xg5nw/must-gather-86js8" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.459212 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xg5nw/must-gather-86js8" Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.694992 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xg5nw/must-gather-86js8"] Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.701150 5122 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 00:34:44 crc kubenswrapper[5122]: I0224 00:34:44.947305 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xg5nw/must-gather-86js8" event={"ID":"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94","Type":"ContainerStarted","Data":"13f94866038e8adddd528a6ee5322d881d2439d469423ff6489ca661a83d3bb8"} Feb 24 00:34:52 crc kubenswrapper[5122]: I0224 00:34:52.004676 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xg5nw/must-gather-86js8" event={"ID":"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94","Type":"ContainerStarted","Data":"5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc"} Feb 24 00:34:52 crc kubenswrapper[5122]: I0224 00:34:52.005314 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xg5nw/must-gather-86js8" event={"ID":"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94","Type":"ContainerStarted","Data":"ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e"} Feb 24 00:34:52 crc kubenswrapper[5122]: I0224 00:34:52.022431 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xg5nw/must-gather-86js8" podStartSLOduration=1.837196338 podStartE2EDuration="8.022406122s" podCreationTimestamp="2026-02-24 00:34:44 +0000 UTC" firstStartedPulling="2026-02-24 00:34:44.7014454 +0000 UTC m=+1551.790899913" lastFinishedPulling="2026-02-24 00:34:50.886655144 +0000 UTC m=+1557.976109697" observedRunningTime="2026-02-24 00:34:52.017563997 +0000 UTC m=+1559.107018540" watchObservedRunningTime="2026-02-24 00:34:52.022406122 +0000 UTC 
m=+1559.111860635" Feb 24 00:34:55 crc kubenswrapper[5122]: I0224 00:34:55.785851 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:34:55 crc kubenswrapper[5122]: E0224 00:34:55.786564 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:34:59 crc kubenswrapper[5122]: I0224 00:34:59.687547 5122 scope.go:117] "RemoveContainer" containerID="5c0d75325d102974dcca8748c78c4258b3569c693bdddf6fbcdb8760ec4b68a4" Feb 24 00:35:08 crc kubenswrapper[5122]: I0224 00:35:08.775731 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:35:08 crc kubenswrapper[5122]: E0224 00:35:08.776811 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:35:20 crc kubenswrapper[5122]: I0224 00:35:20.775136 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:35:20 crc kubenswrapper[5122]: E0224 00:35:20.776022 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:35:31 crc kubenswrapper[5122]: I0224 00:35:31.387104 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-2pxbg_f0657a36-859b-4454-8940-c1b68b1161c6/control-plane-machine-set-operator/0.log" Feb 24 00:35:31 crc kubenswrapper[5122]: I0224 00:35:31.521988 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-4frxv_8e88e04e-2e6c-45d3-97fe-d49d5fd9f480/kube-rbac-proxy/0.log" Feb 24 00:35:31 crc kubenswrapper[5122]: I0224 00:35:31.570158 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-4frxv_8e88e04e-2e6c-45d3-97fe-d49d5fd9f480/machine-api-operator/0.log" Feb 24 00:35:35 crc kubenswrapper[5122]: I0224 00:35:35.775125 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:35:35 crc kubenswrapper[5122]: E0224 00:35:35.775823 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:35:42 crc kubenswrapper[5122]: I0224 00:35:42.604746 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-pg9j6_290a7e41-6a78-42f2-822a-bf2edea66a56/cert-manager-controller/0.log" Feb 24 00:35:42 crc kubenswrapper[5122]: I0224 00:35:42.756215 5122 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-zj7k5_95a05fce-6267-44f4-ad33-ca687ffaeb63/cert-manager-cainjector/0.log" Feb 24 00:35:42 crc kubenswrapper[5122]: I0224 00:35:42.780617 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-579tp_7361183c-a085-44ff-9f1f-b3d494a5296c/cert-manager-webhook/0.log" Feb 24 00:35:46 crc kubenswrapper[5122]: I0224 00:35:46.775495 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:35:46 crc kubenswrapper[5122]: E0224 00:35:46.776039 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:35:54 crc kubenswrapper[5122]: I0224 00:35:54.908509 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-qgg49_e8237e2d-982e-4f5a-80ea-597aaebed4a1/prometheus-operator/0.log" Feb 24 00:35:55 crc kubenswrapper[5122]: I0224 00:35:55.055330 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt_bc26f0ac-7fb0-4223-a613-38006bf7ed17/prometheus-operator-admission-webhook/0.log" Feb 24 00:35:55 crc kubenswrapper[5122]: I0224 00:35:55.071015 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68cc44d484-mkqml_97106b9b-a13e-4f80-8da1-ff7885b694b8/prometheus-operator-admission-webhook/0.log" Feb 24 00:35:55 crc kubenswrapper[5122]: I0224 00:35:55.214985 5122 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-6n2zd_ccd92ea8-69cf-470b-a538-07cf775804b2/operator/0.log" Feb 24 00:35:55 crc kubenswrapper[5122]: I0224 00:35:55.292616 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-lftx5_e000557b-7587-4dce-913f-5a9a064194ad/perses-operator/0.log" Feb 24 00:35:57 crc kubenswrapper[5122]: I0224 00:35:57.774528 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:35:57 crc kubenswrapper[5122]: E0224 00:35:57.775026 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.133220 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531556-j52qm"] Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.148282 5122 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531556-j52qm" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.151158 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.151453 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\"" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.153323 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.154947 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531556-j52qm"] Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.272294 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b66t4\" (UniqueName: \"kubernetes.io/projected/6fc92605-ec27-4e11-8a98-69d8d8200f50-kube-api-access-b66t4\") pod \"auto-csr-approver-29531556-j52qm\" (UID: \"6fc92605-ec27-4e11-8a98-69d8d8200f50\") " pod="openshift-infra/auto-csr-approver-29531556-j52qm" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.374152 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b66t4\" (UniqueName: \"kubernetes.io/projected/6fc92605-ec27-4e11-8a98-69d8d8200f50-kube-api-access-b66t4\") pod \"auto-csr-approver-29531556-j52qm\" (UID: \"6fc92605-ec27-4e11-8a98-69d8d8200f50\") " pod="openshift-infra/auto-csr-approver-29531556-j52qm" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.399836 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b66t4\" (UniqueName: \"kubernetes.io/projected/6fc92605-ec27-4e11-8a98-69d8d8200f50-kube-api-access-b66t4\") pod \"auto-csr-approver-29531556-j52qm\" (UID: 
\"6fc92605-ec27-4e11-8a98-69d8d8200f50\") " pod="openshift-infra/auto-csr-approver-29531556-j52qm" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.467831 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531556-j52qm" Feb 24 00:36:00 crc kubenswrapper[5122]: I0224 00:36:00.934836 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531556-j52qm"] Feb 24 00:36:01 crc kubenswrapper[5122]: I0224 00:36:01.513298 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531556-j52qm" event={"ID":"6fc92605-ec27-4e11-8a98-69d8d8200f50","Type":"ContainerStarted","Data":"22e5aafd12cee47aff9a91f81663980ac5a126d6711d87151f19553e968ec992"} Feb 24 00:36:02 crc kubenswrapper[5122]: I0224 00:36:02.523776 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531556-j52qm" event={"ID":"6fc92605-ec27-4e11-8a98-69d8d8200f50","Type":"ContainerStarted","Data":"cfcdc3598b67cdd1e1087fc0eda861b797b93b95baf4e8e387904904c4c14c87"} Feb 24 00:36:03 crc kubenswrapper[5122]: I0224 00:36:03.532887 5122 generic.go:358] "Generic (PLEG): container finished" podID="6fc92605-ec27-4e11-8a98-69d8d8200f50" containerID="cfcdc3598b67cdd1e1087fc0eda861b797b93b95baf4e8e387904904c4c14c87" exitCode=0 Feb 24 00:36:03 crc kubenswrapper[5122]: I0224 00:36:03.532981 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531556-j52qm" event={"ID":"6fc92605-ec27-4e11-8a98-69d8d8200f50","Type":"ContainerDied","Data":"cfcdc3598b67cdd1e1087fc0eda861b797b93b95baf4e8e387904904c4c14c87"} Feb 24 00:36:03 crc kubenswrapper[5122]: I0224 00:36:03.785266 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531556-j52qm" Feb 24 00:36:03 crc kubenswrapper[5122]: I0224 00:36:03.921426 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b66t4\" (UniqueName: \"kubernetes.io/projected/6fc92605-ec27-4e11-8a98-69d8d8200f50-kube-api-access-b66t4\") pod \"6fc92605-ec27-4e11-8a98-69d8d8200f50\" (UID: \"6fc92605-ec27-4e11-8a98-69d8d8200f50\") " Feb 24 00:36:03 crc kubenswrapper[5122]: I0224 00:36:03.928239 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc92605-ec27-4e11-8a98-69d8d8200f50-kube-api-access-b66t4" (OuterVolumeSpecName: "kube-api-access-b66t4") pod "6fc92605-ec27-4e11-8a98-69d8d8200f50" (UID: "6fc92605-ec27-4e11-8a98-69d8d8200f50"). InnerVolumeSpecName "kube-api-access-b66t4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 24 00:36:04 crc kubenswrapper[5122]: I0224 00:36:04.024724 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b66t4\" (UniqueName: \"kubernetes.io/projected/6fc92605-ec27-4e11-8a98-69d8d8200f50-kube-api-access-b66t4\") on node \"crc\" DevicePath \"\"" Feb 24 00:36:04 crc kubenswrapper[5122]: I0224 00:36:04.542881 5122 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29531556-j52qm" Feb 24 00:36:04 crc kubenswrapper[5122]: I0224 00:36:04.542909 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531556-j52qm" event={"ID":"6fc92605-ec27-4e11-8a98-69d8d8200f50","Type":"ContainerDied","Data":"22e5aafd12cee47aff9a91f81663980ac5a126d6711d87151f19553e968ec992"} Feb 24 00:36:04 crc kubenswrapper[5122]: I0224 00:36:04.543583 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e5aafd12cee47aff9a91f81663980ac5a126d6711d87151f19553e968ec992" Feb 24 00:36:04 crc kubenswrapper[5122]: I0224 00:36:04.855282 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531550-htvhc"] Feb 24 00:36:04 crc kubenswrapper[5122]: I0224 00:36:04.864654 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531550-htvhc"] Feb 24 00:36:05 crc kubenswrapper[5122]: I0224 00:36:05.784745 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810a6cb2-a1cd-4818-b267-e542e52ba120" path="/var/lib/kubelet/pods/810a6cb2-a1cd-4818-b267-e542e52ba120/volumes" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.379874 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc_ce4ede87-e12f-4cba-bc09-e436e147fe31/util/0.log" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.550760 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc_ce4ede87-e12f-4cba-bc09-e436e147fe31/pull/0.log" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.560488 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc_ce4ede87-e12f-4cba-bc09-e436e147fe31/util/0.log" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.562176 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc_ce4ede87-e12f-4cba-bc09-e436e147fe31/pull/0.log" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.711796 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc_ce4ede87-e12f-4cba-bc09-e436e147fe31/extract/0.log" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.727729 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc_ce4ede87-e12f-4cba-bc09-e436e147fe31/util/0.log" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.747292 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1nznpc_ce4ede87-e12f-4cba-bc09-e436e147fe31/pull/0.log" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.775168 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:36:08 crc kubenswrapper[5122]: E0224 00:36:08.775418 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:36:08 crc kubenswrapper[5122]: I0224 00:36:08.856297 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7_4473ff86-fc0e-40e2-8698-19569caf6272/util/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.053282 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7_4473ff86-fc0e-40e2-8698-19569caf6272/util/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.061468 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7_4473ff86-fc0e-40e2-8698-19569caf6272/pull/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.086558 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7_4473ff86-fc0e-40e2-8698-19569caf6272/pull/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.216702 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7_4473ff86-fc0e-40e2-8698-19569caf6272/pull/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.216888 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7_4473ff86-fc0e-40e2-8698-19569caf6272/util/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.230955 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ft2cq7_4473ff86-fc0e-40e2-8698-19569caf6272/extract/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.376664 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl_3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9/util/0.log" Feb 24 
00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.501763 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl_3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9/pull/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.507046 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl_3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9/util/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.550796 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl_3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9/pull/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.668972 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl_3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9/util/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.688388 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl_3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9/pull/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.700752 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e524qxl_3dbd5f22-fa2e-4776-88e0-cdb5f255d8b9/extract/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.842208 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz_0718ee02-3adb-41a4-aff8-2e4778f60c2d/util/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.983402 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz_0718ee02-3adb-41a4-aff8-2e4778f60c2d/util/0.log" Feb 24 00:36:09 crc kubenswrapper[5122]: I0224 00:36:09.987200 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz_0718ee02-3adb-41a4-aff8-2e4778f60c2d/pull/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.003694 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz_0718ee02-3adb-41a4-aff8-2e4778f60c2d/pull/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.156829 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz_0718ee02-3adb-41a4-aff8-2e4778f60c2d/util/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.176333 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz_0718ee02-3adb-41a4-aff8-2e4778f60c2d/pull/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.231184 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljhnz_0718ee02-3adb-41a4-aff8-2e4778f60c2d/extract/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.301229 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zxkvk_83b27a8c-4814-4cea-b395-be2e22807da6/extract-utilities/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.460025 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zxkvk_83b27a8c-4814-4cea-b395-be2e22807da6/extract-content/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 
00:36:10.465787 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zxkvk_83b27a8c-4814-4cea-b395-be2e22807da6/extract-content/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.465800 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zxkvk_83b27a8c-4814-4cea-b395-be2e22807da6/extract-utilities/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.648439 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zxkvk_83b27a8c-4814-4cea-b395-be2e22807da6/extract-utilities/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.660923 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zxkvk_83b27a8c-4814-4cea-b395-be2e22807da6/extract-content/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.807945 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zxkvk_83b27a8c-4814-4cea-b395-be2e22807da6/registry-server/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.834025 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4t9rq_127102d8-1da1-4582-9512-75958969764b/extract-utilities/0.log" Feb 24 00:36:10 crc kubenswrapper[5122]: I0224 00:36:10.989874 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4t9rq_127102d8-1da1-4582-9512-75958969764b/extract-utilities/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.002118 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4t9rq_127102d8-1da1-4582-9512-75958969764b/extract-content/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.006226 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-4t9rq_127102d8-1da1-4582-9512-75958969764b/extract-content/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.157399 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4t9rq_127102d8-1da1-4582-9512-75958969764b/extract-utilities/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.158752 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4t9rq_127102d8-1da1-4582-9512-75958969764b/extract-content/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.313676 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-j9nvj_64eca3ba-8cf7-44c8-9c06-302240cb10d9/marketplace-operator/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.335979 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8cprn_120593d4-24fb-4884-9aa1-ba609c88f3c5/extract-utilities/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.353211 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4t9rq_127102d8-1da1-4582-9512-75958969764b/registry-server/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.522557 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8cprn_120593d4-24fb-4884-9aa1-ba609c88f3c5/extract-utilities/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.546287 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8cprn_120593d4-24fb-4884-9aa1-ba609c88f3c5/extract-content/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.546602 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-8cprn_120593d4-24fb-4884-9aa1-ba609c88f3c5/extract-content/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.690974 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8cprn_120593d4-24fb-4884-9aa1-ba609c88f3c5/extract-utilities/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.709276 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8cprn_120593d4-24fb-4884-9aa1-ba609c88f3c5/extract-content/0.log" Feb 24 00:36:11 crc kubenswrapper[5122]: I0224 00:36:11.890354 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8cprn_120593d4-24fb-4884-9aa1-ba609c88f3c5/registry-server/0.log" Feb 24 00:36:20 crc kubenswrapper[5122]: I0224 00:36:20.774604 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:36:20 crc kubenswrapper[5122]: E0224 00:36:20.775360 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:36:23 crc kubenswrapper[5122]: I0224 00:36:23.016962 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68cc44d484-fp2nt_bc26f0ac-7fb0-4223-a613-38006bf7ed17/prometheus-operator-admission-webhook/0.log" Feb 24 00:36:23 crc kubenswrapper[5122]: I0224 00:36:23.048069 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-qgg49_e8237e2d-982e-4f5a-80ea-597aaebed4a1/prometheus-operator/0.log" Feb 24 00:36:23 crc kubenswrapper[5122]: I0224 00:36:23.087057 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68cc44d484-mkqml_97106b9b-a13e-4f80-8da1-ff7885b694b8/prometheus-operator-admission-webhook/0.log" Feb 24 00:36:23 crc kubenswrapper[5122]: I0224 00:36:23.144928 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-6n2zd_ccd92ea8-69cf-470b-a538-07cf775804b2/operator/0.log" Feb 24 00:36:23 crc kubenswrapper[5122]: I0224 00:36:23.195796 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-lftx5_e000557b-7587-4dce-913f-5a9a064194ad/perses-operator/0.log" Feb 24 00:36:33 crc kubenswrapper[5122]: I0224 00:36:33.783259 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:36:33 crc kubenswrapper[5122]: E0224 00:36:33.784272 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:36:46 crc kubenswrapper[5122]: I0224 00:36:46.775244 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:36:46 crc kubenswrapper[5122]: E0224 00:36:46.775811 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:36:59 crc kubenswrapper[5122]: I0224 00:36:59.776371 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3" Feb 24 00:36:59 crc kubenswrapper[5122]: E0224 00:36:59.777509 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592" Feb 24 00:36:59 crc kubenswrapper[5122]: I0224 00:36:59.830315 5122 scope.go:117] "RemoveContainer" containerID="b516ebc29295e4bd4af59901c52c95ecc2ba8e7ee0f6200c2782dc670797a8fe" Feb 24 00:37:04 crc kubenswrapper[5122]: I0224 00:37:04.037183 5122 generic.go:358] "Generic (PLEG): container finished" podID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerID="ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e" exitCode=0 Feb 24 00:37:04 crc kubenswrapper[5122]: I0224 00:37:04.037259 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xg5nw/must-gather-86js8" event={"ID":"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94","Type":"ContainerDied","Data":"ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e"} Feb 24 00:37:04 crc kubenswrapper[5122]: I0224 00:37:04.038111 5122 scope.go:117] "RemoveContainer" containerID="ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e" Feb 24 00:37:04 crc kubenswrapper[5122]: I0224 00:37:04.714315 5122 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-xg5nw_must-gather-86js8_0a4f0206-6f28-4e30-a4a5-bb5f8874ca94/gather/0.log"
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.011481 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xg5nw/must-gather-86js8"]
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.012311 5122 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-xg5nw/must-gather-86js8" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerName="copy" containerID="cri-o://5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc" gracePeriod=2
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.015226 5122 status_manager.go:895] "Failed to get status for pod" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" pod="openshift-must-gather-xg5nw/must-gather-86js8" err="pods \"must-gather-86js8\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-xg5nw\": no relationship found between node 'crc' and this object"
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.025523 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xg5nw/must-gather-86js8"]
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.411006 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xg5nw_must-gather-86js8_0a4f0206-6f28-4e30-a4a5-bb5f8874ca94/copy/0.log"
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.411650 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xg5nw/must-gather-86js8"
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.413201 5122 status_manager.go:895] "Failed to get status for pod" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" pod="openshift-must-gather-xg5nw/must-gather-86js8" err="pods \"must-gather-86js8\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-xg5nw\": no relationship found between node 'crc' and this object"
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.573527 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-must-gather-output\") pod \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\" (UID: \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\") "
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.573713 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p762t\" (UniqueName: \"kubernetes.io/projected/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-kube-api-access-p762t\") pod \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\" (UID: \"0a4f0206-6f28-4e30-a4a5-bb5f8874ca94\") "
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.579904 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-kube-api-access-p762t" (OuterVolumeSpecName: "kube-api-access-p762t") pod "0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" (UID: "0a4f0206-6f28-4e30-a4a5-bb5f8874ca94"). InnerVolumeSpecName "kube-api-access-p762t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.621800 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" (UID: "0a4f0206-6f28-4e30-a4a5-bb5f8874ca94"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.675144 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p762t\" (UniqueName: \"kubernetes.io/projected/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-kube-api-access-p762t\") on node \"crc\" DevicePath \"\""
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.675180 5122 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 24 00:37:11 crc kubenswrapper[5122]: I0224 00:37:11.781576 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" path="/var/lib/kubelet/pods/0a4f0206-6f28-4e30-a4a5-bb5f8874ca94/volumes"
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.101345 5122 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xg5nw_must-gather-86js8_0a4f0206-6f28-4e30-a4a5-bb5f8874ca94/copy/0.log"
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.102660 5122 generic.go:358] "Generic (PLEG): container finished" podID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerID="5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc" exitCode=143
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.102730 5122 scope.go:117] "RemoveContainer" containerID="5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc"
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.103010 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xg5nw/must-gather-86js8"
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.118091 5122 scope.go:117] "RemoveContainer" containerID="ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e"
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.179154 5122 scope.go:117] "RemoveContainer" containerID="5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc"
Feb 24 00:37:12 crc kubenswrapper[5122]: E0224 00:37:12.179536 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc\": container with ID starting with 5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc not found: ID does not exist" containerID="5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc"
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.179597 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc"} err="failed to get container status \"5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc\": rpc error: code = NotFound desc = could not find container \"5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc\": container with ID starting with 5ae64baaa9c683aff45982b345b549b391071bf6e1cc9761ec8a4073b2fae4fc not found: ID does not exist"
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.179625 5122 scope.go:117] "RemoveContainer" containerID="ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e"
Feb 24 00:37:12 crc kubenswrapper[5122]: E0224 00:37:12.179962 5122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e\": container with ID starting with ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e not found: ID does not exist" containerID="ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e"
Feb 24 00:37:12 crc kubenswrapper[5122]: I0224 00:37:12.180104 5122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e"} err="failed to get container status \"ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e\": rpc error: code = NotFound desc = could not find container \"ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e\": container with ID starting with ca800f44d69741fe434b57b1b2f6d4b79490351b520440fce0b5094441a5767e not found: ID does not exist"
Feb 24 00:37:14 crc kubenswrapper[5122]: I0224 00:37:14.774666 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"
Feb 24 00:37:14 crc kubenswrapper[5122]: E0224 00:37:14.775906 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592"
Feb 24 00:37:28 crc kubenswrapper[5122]: I0224 00:37:28.775024 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"
Feb 24 00:37:28 crc kubenswrapper[5122]: E0224 00:37:28.775825 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592"
Feb 24 00:37:41 crc kubenswrapper[5122]: I0224 00:37:41.774815 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"
Feb 24 00:37:41 crc kubenswrapper[5122]: E0224 00:37:41.775998 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592"
Feb 24 00:37:54 crc kubenswrapper[5122]: I0224 00:37:54.775505 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"
Feb 24 00:37:54 crc kubenswrapper[5122]: E0224 00:37:54.777301 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.157139 5122 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29531558-br2cn"]
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160042 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6fc92605-ec27-4e11-8a98-69d8d8200f50" containerName="oc"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160116 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc92605-ec27-4e11-8a98-69d8d8200f50" containerName="oc"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160167 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerName="gather"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160180 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerName="gather"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160234 5122 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerName="copy"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160247 5122 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerName="copy"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160501 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerName="copy"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160534 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="0a4f0206-6f28-4e30-a4a5-bb5f8874ca94" containerName="gather"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.160560 5122 memory_manager.go:356] "RemoveStaleState removing state" podUID="6fc92605-ec27-4e11-8a98-69d8d8200f50" containerName="oc"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.169539 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531558-br2cn"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.172373 5122 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5z2v7\""
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.173675 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.176061 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531558-br2cn"]
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.181692 5122 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.313680 5122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdfq\" (UniqueName: \"kubernetes.io/projected/87e91f8a-599d-4192-99d6-836675eee92c-kube-api-access-7zdfq\") pod \"auto-csr-approver-29531558-br2cn\" (UID: \"87e91f8a-599d-4192-99d6-836675eee92c\") " pod="openshift-infra/auto-csr-approver-29531558-br2cn"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.415457 5122 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdfq\" (UniqueName: \"kubernetes.io/projected/87e91f8a-599d-4192-99d6-836675eee92c-kube-api-access-7zdfq\") pod \"auto-csr-approver-29531558-br2cn\" (UID: \"87e91f8a-599d-4192-99d6-836675eee92c\") " pod="openshift-infra/auto-csr-approver-29531558-br2cn"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.451285 5122 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdfq\" (UniqueName: \"kubernetes.io/projected/87e91f8a-599d-4192-99d6-836675eee92c-kube-api-access-7zdfq\") pod \"auto-csr-approver-29531558-br2cn\" (UID: \"87e91f8a-599d-4192-99d6-836675eee92c\") " pod="openshift-infra/auto-csr-approver-29531558-br2cn"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.491619 5122 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531558-br2cn"
Feb 24 00:38:00 crc kubenswrapper[5122]: I0224 00:38:00.928891 5122 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29531558-br2cn"]
Feb 24 00:38:01 crc kubenswrapper[5122]: I0224 00:38:01.534773 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531558-br2cn" event={"ID":"87e91f8a-599d-4192-99d6-836675eee92c","Type":"ContainerStarted","Data":"faddbd19d705acbcedd820d46c7c26b7227fae1084cb99bf72a876ed1344c040"}
Feb 24 00:38:02 crc kubenswrapper[5122]: I0224 00:38:02.543014 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531558-br2cn" event={"ID":"87e91f8a-599d-4192-99d6-836675eee92c","Type":"ContainerStarted","Data":"c98d8cf98635d5fa35d27ad8bca52a1522cf4089c7678ec871198e482039e4da"}
Feb 24 00:38:02 crc kubenswrapper[5122]: I0224 00:38:02.559033 5122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29531558-br2cn" podStartSLOduration=1.5893614999999999 podStartE2EDuration="2.559012224s" podCreationTimestamp="2026-02-24 00:38:00 +0000 UTC" firstStartedPulling="2026-02-24 00:38:00.953529382 +0000 UTC m=+1748.042983915" lastFinishedPulling="2026-02-24 00:38:01.923180116 +0000 UTC m=+1749.012634639" observedRunningTime="2026-02-24 00:38:02.554086463 +0000 UTC m=+1749.643540986" watchObservedRunningTime="2026-02-24 00:38:02.559012224 +0000 UTC m=+1749.648466737"
Feb 24 00:38:03 crc kubenswrapper[5122]: I0224 00:38:03.552413 5122 generic.go:358] "Generic (PLEG): container finished" podID="87e91f8a-599d-4192-99d6-836675eee92c" containerID="c98d8cf98635d5fa35d27ad8bca52a1522cf4089c7678ec871198e482039e4da" exitCode=0
Feb 24 00:38:03 crc kubenswrapper[5122]: I0224 00:38:03.552460 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531558-br2cn" event={"ID":"87e91f8a-599d-4192-99d6-836675eee92c","Type":"ContainerDied","Data":"c98d8cf98635d5fa35d27ad8bca52a1522cf4089c7678ec871198e482039e4da"}
Feb 24 00:38:04 crc kubenswrapper[5122]: I0224 00:38:04.835583 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531558-br2cn"
Feb 24 00:38:04 crc kubenswrapper[5122]: I0224 00:38:04.997044 5122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdfq\" (UniqueName: \"kubernetes.io/projected/87e91f8a-599d-4192-99d6-836675eee92c-kube-api-access-7zdfq\") pod \"87e91f8a-599d-4192-99d6-836675eee92c\" (UID: \"87e91f8a-599d-4192-99d6-836675eee92c\") "
Feb 24 00:38:05 crc kubenswrapper[5122]: I0224 00:38:05.003688 5122 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e91f8a-599d-4192-99d6-836675eee92c-kube-api-access-7zdfq" (OuterVolumeSpecName: "kube-api-access-7zdfq") pod "87e91f8a-599d-4192-99d6-836675eee92c" (UID: "87e91f8a-599d-4192-99d6-836675eee92c"). InnerVolumeSpecName "kube-api-access-7zdfq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 24 00:38:05 crc kubenswrapper[5122]: I0224 00:38:05.098926 5122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7zdfq\" (UniqueName: \"kubernetes.io/projected/87e91f8a-599d-4192-99d6-836675eee92c-kube-api-access-7zdfq\") on node \"crc\" DevicePath \"\""
Feb 24 00:38:05 crc kubenswrapper[5122]: I0224 00:38:05.580329 5122 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29531558-br2cn" event={"ID":"87e91f8a-599d-4192-99d6-836675eee92c","Type":"ContainerDied","Data":"faddbd19d705acbcedd820d46c7c26b7227fae1084cb99bf72a876ed1344c040"}
Feb 24 00:38:05 crc kubenswrapper[5122]: I0224 00:38:05.580381 5122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faddbd19d705acbcedd820d46c7c26b7227fae1084cb99bf72a876ed1344c040"
Feb 24 00:38:05 crc kubenswrapper[5122]: I0224 00:38:05.580557 5122 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29531558-br2cn"
Feb 24 00:38:05 crc kubenswrapper[5122]: I0224 00:38:05.621203 5122 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29531552-hghbn"]
Feb 24 00:38:05 crc kubenswrapper[5122]: I0224 00:38:05.630606 5122 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29531552-hghbn"]
Feb 24 00:38:05 crc kubenswrapper[5122]: I0224 00:38:05.785239 5122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea4ee8e-ad0d-42a7-81a6-9471ee82df19" path="/var/lib/kubelet/pods/fea4ee8e-ad0d-42a7-81a6-9471ee82df19/volumes"
Feb 24 00:38:06 crc kubenswrapper[5122]: I0224 00:38:06.774546 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"
Feb 24 00:38:06 crc kubenswrapper[5122]: E0224 00:38:06.774912 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592"
Feb 24 00:38:19 crc kubenswrapper[5122]: I0224 00:38:19.774874 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"
Feb 24 00:38:19 crc kubenswrapper[5122]: E0224 00:38:19.775726 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592"
Feb 24 00:38:33 crc kubenswrapper[5122]: I0224 00:38:33.790042 5122 scope.go:117] "RemoveContainer" containerID="13f740d51ed25fa0b47d2a0f20ea349f794f8ba0ddb7e44badd07a5d62c7e5e3"
Feb 24 00:38:33 crc kubenswrapper[5122]: E0224 00:38:33.791441 5122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mr2pp_openshift-machine-config-operator(a07a0dd1-ea17-44c0-a92f-d51bc168c592)\"" pod="openshift-machine-config-operator/machine-config-daemon-mr2pp" podUID="a07a0dd1-ea17-44c0-a92f-d51bc168c592"